Making good
Insights on AI and public good from diverse UK communities
26 March 2025
Reading time: 168 minutes
The community researchers were fundamental to shaping the research through co-design, participatory practice and the co-analysis that underpinned the findings presented here. The inputs of Anita Kambo, Natoya Whyte, Rae Turpin, Claudia Murg, Paula Quigley and Patrick Toland, collectively and individually, are recognised as co-authorship.
Executive summary
This report is – as far as we know – the first study to look at the role of place and community in relation to people’s expectations of AI. It sets out how three communities of diverse publics think about ‘public good’ and its relationship to AI. It is designed to complement existing empirical quantitative and qualitative studies by understanding how people from diverse communities in the UK are currently encountering the realities of the unfolding ‘AI revolution’. It explores what people feel the opportunities of AI could be for them, and where they see tensions in relation to their experiences of existing products and services, and future vulnerabilities to technological harms.
Through this deliberative engagement with people in Belfast, Brixton and Southampton, we have been able to show something of where the views or assumptions embedded into UK policy or AI research and development currently diverge from public expectations and hopes. Seen from a policy or research and development perspective, AI for public good can mean a range of things, from a suite of civic tech tools to justification for a funding package for AI deployment at scale.
The diverse publics in this study recognise the importance of AI to the UK economy. However, for them, economic growth through technological innovation is just one component of what public good means in relation to AI. They are not convinced that public good will be created by any programme that looks at social problems through the lens of AI-driven technical solutions.
For these people, AI for public good is about the bigger picture, and taking a holistic and interconnected approach across policy, to ensure that AI makes life better, fairer and good for everyone. Getting it right requires deliberate actions to support values-based governance across government, public services, civil society and industry.
It will be vital to address any prospect of public backlash or loss of public trust – which is one of the biggest risks the UK government itself identifies in the delicate balance of creating sufficient safeguards for emerging AI technologies, while enabling space for market-driven innovation.[1] Global examples of when developments have been misaligned with public expectations – from the controversy surrounding the use and unequal impacts of the UK Ofqual algorithm,[2] to the discriminatory effects of the Netherlands’ use of an algorithm to detect benefit fraud[3] – attest to the validity of this concern.
Closing this gap between policy ambition and publics’ expectations is important, so that technology can develop in alignment with public expectations, rather than to an agenda driven by economically pressured policymakers and commercially driven technology companies. Evidence about publics’ views and expectations can help to bridge these differences.
In presenting the perspectives and lived experience of these people, we are able to signpost where and why public confidence in the current directions of AI policy is low and where trust with publics should be better established. We are also amplifying the creative, generative suggestions that publics have for AI-supported futures and conveying their forceful case for consensual and public-centred policy-making around AI in the UK.
The report presents evidence that centres participant voices, through description or verbatim quotes. The first two sections focus on how participants made sense of public good and AI, respectively. Section three draws out how they saw the relationship between these two ideas, and the final section sets out what people expect to see from AI policy and development, for it to support public good in the longer term.
The ability to connect with communities and people who are often underrepresented in AI research was made possible through the work of the community researchers – experts in their own local contexts, who provided physical and relational spaces in which the research took place. Some of the verbatim quotes reflect the views and interpretations of community researchers, whom we identify by name and location. Others represent words expressed by participants; their anonymity has been protected, but we note their location in most instances, where possible and where relevant.
Throughout the report, we spotlight specific areas of discussion that participants felt were important, such as the use of facial recognition technologies in policing, or AI in social care. These have been selected to show where sentiments around AI and public good were particularly strong, unexpected or contested. They also enable some explanation of why people feel the way they do – highlighting the intersectionalities in lived experience that a place-based approach helps to bring into view.
The report culminates by setting out four headline expectations regarding AI use and deployment, put together by Ada researchers based on the evidence of participant views and developed with participants in a final workshop (March 2025). These provide indications of where publics are open to AI being used for good in their own lives and the wider communities whose needs they considered with care – and where they feel AI and public good are in tension, and more consideration is needed before AI emerges into people’s lives in the here and now.
Summary of our insights
AI for public good isn’t ‘one thing’ – for publics, it’s not a programme, initiative or a set of technologies
Public good invites a diverse set of interpretations, but there are core ideas and meanings that people from different backgrounds, politics and life experience hold in common: the importance of fairness and equity, social connection and community, and the structural support and provision of services that allow people to live meaningful and purposeful lives.
The diverse publics that engaged with our research believed AI for public good meant a commitment to centre values. When people considered public good, it was an inherently moral and values-based concept. They did not redraw these moral boundaries for public good when considering AI’s potential uses in society. Rather, they thought AI should harmonise with and build on public good, not conflict with it. In particular, they wanted AI to benefit everyone and protect the most vulnerable. They believe that for AI to work for public good, action and values-based governance are required across public services, civil society and industry.
Publics expect AI design, deployment and policy to accommodate pluralism and diversity for these technologies to work in the public good – they know that there isn’t a singular ‘AI revolution’ that is going to work for everyone. When thinking about public good, people expected that some groups, communities or individuals may have different needs, views, expectations and vulnerabilities. They wanted these differences and requirements to be taken into account in AI design, deployment and policy, so everyone can benefit from AI and no one is harmed.
‘Place’ matters to how people thought about AI’s opportunities and availability and how they wanted to make decisions. We should expect publics across the UK to reflect devolved, local or regional expectations and political cultures, which will be a critical component of how people both encounter and make decisions about AI. Research exploring people’s attitudes to and expectations of AI should take the effect of place into account – not just geography but also locality and community.
Communities want more autonomy and control in how AI manifests in their lives – we need better or more diverse models of what devolved choice could mean in AI deployment.
Publics expect the relationship between public good and AI to be managed so that it is:
- Pro-social and equitable: public and person-centred, and supportive of individuals’ talents and abilities.
- Relational and ethical: AI should care for human and community needs.
- Future-focused and ambitious: AI should advance humanity’s needs, recognising children and future generations.
- Responsibly deployed: used considerately, and only where necessary and effective.
The research reported here was undertaken as part of Public Voices in AI, a satellite project funded by Responsible AI UK and EPSRC. Support for the Ada Lovelace Institute’s work on the deliberative enquiry was provided by BRAID. BRAID is funded by AHRC.
Public Voices in AI was a collaboration between: the ESRC Digital Good Network at the University of Sheffield, Elgon Social Research Limited, Ada Lovelace Institute, The Alan Turing Institute, and University College London.
Other publications produced through this programme include a nationally representative public attitudes survey,[4] which provides complementary quantitative insights.
How to read this report
For local and national policy and decision-makers: This work is intended to complement insights about public attitudes to AI from quantitative surveys, such as the Ada-Turing AI attitudes survey. It demonstrates the rich and actionable information that can be drawn from publics in communities, and gives indications of public views that can enable a technology regulatory landscape that is in step with public expectations. This will contribute to supporting positive policy decision-making and mitigate the risks of pursuing approaches and policies in relation to AI that do not command public trust. See in particular:
- Executive summary
- ‘Public good and AI’, particularly the section on ‘AI for public good is not “one thing”’
- ‘What do publics expect to see?’
For public-interest AI developers: This work exposes the public’s current discomfort with some aspects of corporate technology development and behaviours. It also provides insights into the public’s own sense of what public good means for them, and how it could be operationalised into AI systems. See in particular:
For people and communities: This work represents the experiences, hopes, concerns and views of 47 people living in Belfast, Brixton and Southampton in 2024. These people span a range of ages, genders and ethnic backgrounds, and of awareness of and experience with AI technologies. They are all – to some extent – engaged in making sense of their longstanding feelings about public good in relation to emerging AI technologies. Even though these technologies are mostly not yet in use in their communities, they already have strong senses of what is and isn’t acceptable to them, and supportive of the societies they want to live in – and want their children and future generations to live in. See in particular:
- ‘A place-based enquiry: what’s good for (diverse) communities?’
- ‘Public good and AI’, particularly the section on ‘AI for public good is not “one thing”’
- ‘What do publics expect to see?’
For public participation practitioners: This study is the first of its kind to look at the role of place and community in relation to people’s expectations of AI. It demonstrates the use of a methodology of a deliberative enquiry to explore people’s experiences, hopes, concerns and views. See in particular:
- ‘Methodology: our approach to the research’
- Description of the process of sense-making in ‘Sense-making’
- Description of the concept of emergence in the chapter ‘Making sense of AI’
- ‘Portraits of people and places’: Reporting on place in relation to Belfast, Brixton and Southampton
For academics: This study builds on theories of public benefit, commons and good – as well as sense-making and emergence – to inform a study of the role of place and community in relation to people’s expectations of AI. It makes an important methodological contribution, as well as contributing to the evidence base of publics’ views on the relationship between public good and AI, with a specific relationship to place. See in particular:
- ‘“Public good AI”: in search of a legitimate definition’
- Description of the process of sense-making in ‘Sense-making’
- Description of the concept of emergence in the chapter ‘Making sense of AI’
Introduction
Context: The AI revolution
Some say we are living in the time of an ‘AI revolution’ – and that it is predominantly a technological revolution, manifesting in a proliferation of AI-enabled tools and consumer products. If its effects are as wide-reaching as the term ‘revolution’ suggests, it will affect how we experience everything, from work, to healthcare, to education, to mental-health support, to relationships.
That the AI revolution can bring rapid societal benefits as well as economic growth has become an article of faith for leaders and decision-makers in the UK government, as well as the global tech industry. The UK government recently announced its commitment to ‘shape the AI revolution’ at both a domestic and global level through the AI Opportunities Action Plan.[5] Meanwhile, technology companies cast the AI revolution as the key to humanity’s survival, provided we can circumvent its existential risk, invoking a vision of technologies seamlessly integrated into societies.[6] [7]
Despite attempts to rein in the rhetoric and frame AI as part of a longer technological change,[8] the discourse of the AI revolution remains one of the most powerful ideas driving UK policy and industry research and development today. At its core is a contested set of ideas about the necessity and inevitability of technology-driven innovation, which mobilises a constellation of different advocates and cautioners in policy, regulation, industry and civil society.
These groups have very different ideas about what this revolution entails and how it should be achieved, such as: the (global) role of regulation and governments; the range and spread of societal applications, benefits and dangers; and the degree to which ‘agentic AI’ should be accepted and adopted. But in the current global governance environment, the power of technology companies to decide the course of the AI revolution seems assured.
In response to the discourses of the AI revolution and its unfolding realities, participatory input and public voice matter more than ever. A substantial body of research has evidenced that public input into policy is intrinsic to building public trust and confidence in decisions about AI.[9] Scholars involved in AI and society research have repeatedly advocated for building evidence of diverse public expectations and interests to ensure that the benefits of AI are shared across society and reduce harm.[10] Yet even as this body of scholarship has grown, the emphasis on public input, which caused considerable optimism that the ‘deliberative wave’ would help shape AI development, has taken a back seat in policy language.[11]
So, nearly a decade on from the first articulations of UK AI policy (2016),[12] we still lack a shared vision for AI and the ‘good society’ – an articulation of an agreed set of ‘good’ outcomes to steer AI investment, development, deployment and adoption.[13] The need for this has not disappeared, nor has the imperative to involve people in deciding these directions. Indeed, if the AI revolution is as powerful and as consequential as the rhetoric surrounding AI delivery suggests, then who defines the agenda for ‘public good’ is one of the most important questions anyone can ask.[14]
Ensuring these concepts reflect and have meaning for diverse publics is a crucial component of securing public legitimacy for innovation.[15] The UK Secretary of State for Science, Innovation and Technology recently told Parliament: ‘Trust is incredibly important in this whole agenda. We have seen too many times in the past where a fearful public have failed to fully grasp the potential for innovation coming out of the scientific community in this country. We are not going to make that mistake. We understand from the outset that to take the public with us we must inspire confidence.’[16] As the UK government iterates its roadmap for AI development, there will be no bigger roadblock to AI’s transformative potential than a failure in public confidence.[17]
This research set out to build some foundations for a truly public agenda for AI, by asking people from three communities to set out in their own words what public good means to them, and spending time with them to learn how AI interfaces with their lives, values, ambitions and hopes for the future. The aim is to understand better what these people think is needed from local and national government, industry and civil society to create AI that reflects and supports their public good.
‘Public good AI’: in search of a legitimate definition
Existing models of public good
The need to balance (potentially competing) power dynamics between corporate, state and public interests in the AI revolution has long been recognised, and there are also longstanding debates in academia, policy and the technology sector about how to do this. There are many competing models, some of which repurpose or reinvent historically and institutionally situated concepts for the AI ecosystem – such as ‘public interest AI’,[18] [19] ‘public benefit AI’,[20] ‘AI4People’[21] or ‘AI for social good’ (AI4SG).[22] These are proposed as being able to balance AI towards positive societal outcomes and might include, for example, operationalising values of equity or fairness, co-designing governance processes or developing collaborative enterprises or social initiatives between tech companies, AI developers and researchers.[23]
Within these, there are different models for governmental power and private investment. Public interest AI is moving into prominence in government and philanthropic initiatives,[24] where it is framed as a potential solution to pressing social problems across various domains. The French government’s announcement that public interest projects around AI would be funded through a €400 million endowment to a public-private-philanthropic partnership (Current AI) is a new and substantial development in how relationships between private technology companies, corporate interests and government converge in public programmes for AI.[25] By contrast, AI4SG (AI for social good) speaks more to endeavours that tech companies can enact to build deeper relationships with communities or grassroots initiatives through public or industry funding, which often envisage participatory input from publics in different ways.[26]
These initiatives can struggle to achieve their objectives. There is considerable evidence that technology companies in the wider ecosystem have not yet managed to square competing incentives or to devise a legitimate or consistent approach to incorporating public voice in decision-making.[27]
Despite a substantial oversight apparatus put in place around the French government’s programme, there is a risk that this, too, will fail to sufficiently address underlying power imbalances and dominant assumptions about positive technological progress. There are well-founded concerns that programmes for social good can in reality detract from societal needs or undermine pro-social aims.[29] [30]
Contested definitions and understandings
There is no clear evidence that public good offers a robust conceptual route to balance these interests or incorporate publics into decision-making, especially when invoked rhetorically in the delivery of funded programmes. In discourses around AI, public good has featured increasingly as a basis for policy-making – most recently, for instance, in the announcements surrounding the UK’s AI Opportunities Action Plan,[31] which promises to ‘harness the power of AI for the public good’.[32] The potential for delivery of this promise is undermined by the fact that there is not one agreed conception of either social or public good.[33] It is a contested term that is used in relation to differing social, political and economic ideas about benefit, ownership, access, community, contribution, inclusion, value, state provision, participation, rights and privileges.
This means that public good cannot be understood as a static reality – it is grounded in the work of philosophers and political theorists, brought into being by policymakers and publics, and reshaped through democratic deliberation and negotiation. It follows that calling an initiative good, beneficial or in the public interest does not make it so.
While some initiatives produce positive outcomes, there is a recognised need to locate the legitimacy of these enterprises not in an abstract idea of ‘good’, but in the views, concerns, hopes and expectations of publics – and in the contexts in which technologies are deployed.[34] [35]
Across academia, social research and the third sector, there is a growing interest in fostering debate and discussion around what public good means for UK society in relation to AI,[36] building on evidence that definitional work with diverse publics can help create more public-centric policies around data and AI. Deliberative work on publics’ understanding of public interest, public benefit, and public good for data has led to procedural changes in various institutional contexts and helped to embed public perspectives in the way that data is actioned and mobilised.[37] [38] [39] Participatory work with publics over ideas of public good has also helped to strengthen institutional stewardship and nurture legitimacy for public statistics.[40]
This work intersects with a wider and substantial effort to comprehend the range and diversity of people’s views about AI, which can support understanding of what publics believe is most beneficial or concerning about AI. Public attitudes research from a range of disciplines surfaces some consistencies in expectations around AI. This includes optimism around the use of AI in advancing science and some aspects of healthcare, concerns around automated decision-making that affects people’s lives, and a strong belief that regulation is needed.[41] In fact, the most recent wave (2025) of the Ada Lovelace Institute and Alan Turing Institute national survey of UK attitudes to AI, referred to in this report as the Ada-Turing AI Attitudes (2025) survey, reveals that desire for laws and regulation has increased in the last two years.[42]
Attitudes research alone cannot make clear the multiple possibilities that public good and AI may present to publics. There are consistent limitations in the range of evidence available on public attitudes in this respect: an overarching Western-centrism,[43] and a lack of inclusivity of minoritised or excluded groups,[44] despite evidence to suggest some applications of AI may have disproportionate negative impacts on people from these groups.[45] [46]
These limitations can sometimes lead to narratives around AI that do not help decision-makers understand what the public think or want: an example is the prevalence of narratives around public ambivalence towards AI, when in reality some publics have strong views about specific applications of AI that are positive, negative, contradictory or multifaceted.[47] Often approaching lived experience through the lens of measuring ‘awareness’ or ‘adoption’, survey work can buttress an overarching technological determinism, which leaves little space for scoping what publics really want or need.[48]
A place-based enquiry: what’s good for (diverse) communities?
This research has used a place-based approach to exploring public good, because we argue that geography is an important delineating factor in how publics are located in the AI revolution and how much power they will have to decide its course. Geography is an acknowledged determinant of inequalities, such as in health, because different power, economic and political structures coordinate in different ways on the ground.[49]
These coinciding factors of geography and structural inequalities will have an intrinsic effect on how people can benefit (or not) from AI, and whether and how people’s voices are heard in AI research and policy. A place-based framing is therefore an important consideration for widening participation and access, and informing AI research with evidence of the needs and expectations from diverse people across the UK.
The UK experiences extensive geographical inequalities, which manifest as place-based differences in income, productivity, opportunities and health outcomes.[50] These disparities interact with digital systems and infrastructure in places across the UK, as Ada’s previous work on access to digital health has shown.[51]
Publics perceive and experience geography as a fundamental constituent of inequalities.[52] For example, the geographic differences in outlooks and the strength of localised grievances regarding immigration were clearly visible in the riots that erupted after the 2024 Southport murders, just a few weeks before fieldwork commenced.[53] We can expect, in this context of historic geographic inequalities, that people who live in different locations and situations in the UK may hold different ideas about what AI opportunities mean for them.
A place-based framing therefore invites us to explore how far geographic disparities may shape how people in different communities access – or expect to be able to access – AI’s opportunities, or whether they are – or see themselves as – excluded from its benefits. It may also help to pose a bigger set of questions about, for example, the role of local and regional identity, collective memory or place-based political cultures in both understanding and negotiating AI’s deployment.
Despite the focus on place in UK policy, evident in programmes such as ‘Get Britain Working’[54] and ‘Levelling Up’,[55] consideration of local capabilities and needs is not yet forefronted in AI policy. However, the UK government’s focus on economic growth and innovation, which is driving much of AI policy, is also reflected in devolution plans, such as the regionalisation of planning powers and the emphasis on redistribution of power.[56] Place is going to be increasingly important both in how people engage with AI and in how they negotiate issues that arise from AI deployment.
Devolving more power for – for example – the delivery of public services to local or regional authorities will give them greater influence over how AI intersects with people’s lives. But this will require negotiation: people in the UK may trust local governments slightly more than national government,[57] but experience of services, levels of civic engagement and awareness are unevenly distributed across the country.[58] [59] In the context of the current funding crisis, local bodies may not have capacity to build routes for co-production around AI, or to strengthen already strained civic relationships.[60] In fact, relocating planning powers to local authorities may make it harder for some local communities to contest the construction of AI infrastructure, such as data centres, in their neighbourhoods, as mayors will have increased powers to override local planning decisions.[61]
Recognising the unique ways in which structural, systemic and economic factors combine in different geographies is key to building ethical approaches to finding solutions and mitigations for place-based inequalities.[62] In relation to AI particularly, place-based community models of governance and participatory engagement (sometimes grouped under the umbrella of ‘AI localism’) are seen as an innovative, flourishing and important contribution to the overall ecosystem.[63] This research argues that place is therefore a critical factor in how diverse UK publics will be able to engage with AI, and that examining and building evidence around place will help us unlock pathways towards good AI uses and policies.
Methodology: our approach to the research
Public good and AI
In its ambition to surface more relational or expansive perspectives on AI, this new deliberative enquiry speaks to various existing research programmes and initiatives. It is allied to work that has been underway to test what ‘possibilities, principles, processes and practices’ might help realise ‘what makes a good digital society?’.[64] And it contributes to wider and more recent deliberative efforts, which are currently creating interfaces for communication between publics and decision-makers, such as the Royal Academy of Engineering’s People’s Stewardship Summit[65] or the Children’s AI Summit.[66]
Complementing these studies, and responding to the ambiguity in the current AI policy discourse around the term ‘public good’, our investigation began with a simple question:
‘What does public good mean to diverse publics when it comes to AI?’
By engaging publics themselves to respond to this question, we respond also to work within the AI4SG field that has emphasised that directions for ‘good’ must be led by communities, who should decide ‘whether and how they would like to use AI’ as a basis for evolving an ‘AI for social good’ framework.[67]
Our investigation, however, has tried to give publics substantial opportunities to centre their own ideas of a good society and make it easier to contest or reject the use of AI in relation to those tenets. Through de-centring the technology, and re-centring people, we aimed to build empirical evidence to understand people’s priorities through a capabilities-based approach, which recognises community strengths to rebalance the power asymmetries that are embedded into AI policy and infrastructure.[68]
Place-based case studies
This research aimed to build some foundations for the consideration of ‘place’ during a time of wider systemic changes in governance, AI delivery and economic policy. It does this through adapting a place-based case study methodology, which is well suited to understanding how interweaving and complex social challenges materialise in people’s daily lives.[69] By engaging diverse publics in Belfast, Brixton and Southampton, we hoped to trace the possible conceptual range, and potential points of divergence, in people’s understanding of public good. This formed a basis for deeper understanding of the interlocking nature of place and inequalities, and how these may be represented in perspectives on AI.
A place-based case studies approach gave us the methodological richness and opportunity to do this, by allowing us to bring techniques together to cast a ‘360 view’ on how people understood the interactions between AI and their lives. This primarily consisted of assembling deliberative and community-led research methods. AI is acknowledged as a complex and ‘wicked’ problem, which is embedded in, and related to, wider societal and structural challenges.[70] Adaptations of deliberative communications theory emphasise the importance of both translating these issues into familiar social and cultural milieux, and spending time to understand how people develop these views ‘behind the scenes’ of one-off deliberative workshops or events.[71]
We chose three communities (case studies) as locales for this research and engaged pairs of community researchers in each site to collaborate with us in codesigning a loose deliberative process. This aimed to deepen understanding of how the interlocking nature of place and inequalities in these areas may convene perspectives on AI. Belfast, Brixton and Southampton – while not recognised as the most deprived areas in the UK, either in the Index of Multiple Deprivation (2019)[72] or in media discourse[73] – are each locating points for various structural inequalities, and provide gateways into diverse communities and lived experiences of exclusion.
Each area has faced common challenges in recent years: extreme poverty and low social mobility from long-term, structural and historic economic exclusion,[74] worsened by ‘austerity’ (under the UK’s coalition government 2010–15), and more recently the cost-of-living crisis that followed Brexit and Russia’s invasion of Ukraine.[75] These areas suffer from high pollution from port industry or heavy traffic, which has disproportionately affected minoritised and low-income households.[76] They have all witnessed heightened tension from social dislocation and alienation, which have manifested differently in each place, whether in knife crime or paramilitary violence.
In each of these communities, social cohesion and community relationships have been severely tested in recent years. Immigration has proved a flash point for racism and far-right action in Belfast[77] and Southampton;[78] in Brixton, the community action mobilised in the summer of 2024 demonstrates that residents know all too well the experience of riots and violence on the streets, as well as the injustices that lie at their roots.[79]
We worked closely with community researchers to design a process where we became more familiar with participants’ lives, their outlooks, and priorities for public good, and how they thought AI might relate to these ambitions. We know from related work on the sociologies of data and data practices that people’s knowledge of esoteric concepts such as ‘data’ is created through mundane and everyday engagements and interactions.[80] [81] [82] When people mobilise their views about data and data practices, they constitute this knowledge through feelings, imaginings and speculative fictions (‘what ifs’).[83] [84] We therefore balanced a series of structured, arts-based engagement events with more ethnographic approaches that could capture the subjective and contextual nature of meaning-making (interviews, note taking, observations and participant-led inputs).
People and places: who was included in the research
Within each of these ‘place-based’ approaches, there were multiple communities that could be invited into the research. We asked the community researchers to draw together a diverse and inclusive group of people in each place, which reflected important demographic realities, underserved groups and perspectives in the local area. To ensure an inclusive and diverse approach, we agreed on six criteria through which participants could self-identify in relation to different forms of exclusion, as part of an application process across all sites – asking people to self-select from one or more categories: poverty, disability, ethnic identity, citizenship, gender or sexual identity, or (more broadly) being ‘part of a community not listened to by people in power’.
By operating community-led recruitment, we gathered diverse cohorts that spoke to experiences usually omitted from AI research. We had a very wide range of minority ethnic representation, for instance, because of the community basis of the approach. The energy and commitment of the community researchers to the equitable principles of our recruitment also ensured inclusivity across other core excluded groups. Some 26% of people self-identified as having a disability, for instance. We also saw small but significant (15%) LGBTQIA+ representation in the workshops. The strength of connections between Belfast community researchers and their local voluntary sector meant that we achieved significant representation of people who have been excluded based on citizenship (19%).
We could not resolve all participatory inequalities through a community-led approach. We did not seek to be representative or to use methods like sortition to ensure a range of characteristics. We struggled to recruit men from lower socio-economic backgrounds, which led to more female-dominant groups in Belfast and Brixton. There were particular concentrations and absences of perspectives: Belfast’s group, on the one hand, congregated people with experiences of migration that other sites did not represent as strongly, but it was also the only site with no declared LGBTQIA+ representation. The ages of participants were concentrated mainly in the 25–54 age groups, although each site had some representation of people under 25 and some had representation above the age of 55.
We aimed to gather groups of people that were reflective of some of the core social and cultural dynamics of these local societies. The people we congregated reflected diverse sections of their community and were each differently placed within the local nexus of structural forces. They each came with their own senses of place – which were global as well as hyperlocal – as well as differing relationships with the community they lived in and what that meant to them.[85] But during the project, through meeting each other and engaging with each other’s views, these multiple and diverse viewpoints coalesced in a sense of themselves as a ‘temporary community’, which was profoundly but uniquely connected to both people and place.[86]
In Brixton, many participants came from the borough’s longstanding Black communities and its Windrush generations; but there were others who spoke to the borough’s multiculturalism, bringing related meanings of Blackness (Black American), or experiences of ethnic minoritisation (Muslim, Eastern European). Some were long-term residents, but there were more recent arrivals, too, testifying to Brixton’s high social flux. Two participants came with experience of homelessness; it has been increasingly difficult for long-term locals to afford to stay in the borough, as gentrification hikes rents and the cost of living. And there were a good number of people who advocate for others who are vulnerable in this climate.
In Belfast, most participants learned about the project through the networks of the city’s voluntary sector, a sector facing increasing pressures through rising costs and needs since the COVID-19 pandemic.[87] Some of our participants worked, or had worked until recently, in this sector, supporting children’s mental health or homeless people. Others volunteered for local advocacy groups, campaigning for fairer rent or better support for neurodiversity. Nearly half of participants had connections with charities supporting refugees and migrants to settle in Northern Ireland. Immigration has become an increasingly prominent, but contested, reality of Belfast’s society in recent years. In this, the group distilled unique place-based realities for Northern Ireland.[88]
Southampton, too, has a variegated voluntary and community life, and participants came from across the ‘many villages’ of the city’s activist, creative, voluntary and advocacy lives, as well as different parts of its economy. Some were artists, musicians and creatives, sometimes choosing to live their lives outside social norms – and balanced their passion for creativity or community life with various strands of precarious income. Others worked in public services or in drug and alcohol rehabilitation, or studied at the city’s universities. Two members were ex-military and had learned about the project through the city’s veterans’ networks.[89]
Helping others, and the critical importance of community, were things that participants all held in common, irrespective of place, even if they realised this in different ways. There were other intersecting points in lived experience across our sites, such as the experience of poverty (33% declared they shared this) and the realities of balancing various strands of precarious income. Some people were in full-time work; others had recently lost work, or – with the burdens of caring or managing disabilities – had been unable to meet full-time job commitments for some time. Two participants could not work at all due to the status of their migration settlement application.
Sense-making
We have described this methodology as an ‘enquiry’ because that term references the longer timeframe of the research process and the different routes for dialogue-based communication, listening and feedback we established. Through these, we could observe the various processes of signifying and crystallising that participants engaged in around the related concepts of public good and AI. We use the concept of ‘sense-making’ to describe these, because it recognises that meaning-making takes place over time, in social contexts, and that individuals draw on various resources (cultural, social and personal) to make sense of unfolding and dynamic contexts, and decide ‘what’s the story’ and how to respond to it.[90]
Qualitative research provides various tools to understand how people make sense in social settings, and how and why they engage cultural templates and shared understandings, in order to enter into negotiation about the significance of a topic.[91] Exploring perceptions and outlooks to this depth, we argue, is crucial for understanding how diverse publics feel about AI tools, systems and infrastructure, and this may shape how views are expressed in, for instance, the attitudinal surveys that are more commonly used to measure public sentiment.
Our enquiry method used various means to create a rounded view of how participants made sense of public good and AI. This meaning-making was not uniformly dispersed or one-directional: people drew from our learning programme differently, depending on their levels of familiarity with AI, their wider needs, differing expectations and lived experience. Some (not all) told us they benefited from each other’s technical, practical or social knowledge and used this to crystallise their thoughts. They drew in resources from outside the workshops to catalyse this process: social intelligence from their family or friends, their social media networks and trusted media.
We have not captured the totality of their sense-making; we see this research as a foundation for further negotiation on the part of publics, researchers and policy-makers, where other views and perspectives will come into view. Sense-making is an ongoing process. As one participant commented when reflecting on her learning about AI, human intelligence is organic – ‘we are also our emotions, our hormones, our senses, we see, we hear, we touch, we feel’ and a ‘lot of input is changing daily and constantly’. Our programme came to an end in December 2024, but participants are still engaged with this process of making meaning in what is an ever-evolving and shifting environment.
For the purposes of this study, understanding people’s sense-making has allowed us to point to what might underpin people’s attitudes towards AI. It also enables us to identify the importance of their relational dynamics with AI, building on an array of sociological research that emphasises the formative role of the ‘everyday’ in shaping people’s views.[92] Through observing participants’ responses during the research, we have been able to identify and investigate, in the section ‘Making sense of AI’, how the wider context of ‘emergence’[93] – a state of feeling on the verge of radical change – is formative in shaping the views of diverse people in the UK and creating a climate of uncertainty, around which a lack of confidence, insecurity and distrust are increasingly likely to coalesce.
Context of the enquiry
All research occurs alongside political and social climates that shape conversations and findings, as well as participants’ emotional engagement with the subject of the research. Because some of the participant comments and discussions represented in this research refer to contemporary events, we offer this timeline of major events around which workshops took place.
The recruitment of community researchers began in mid-September 2024, some six weeks after the Southport murders, and the riots and protests that took place in their aftermath.[94] Questions and concerns about social cohesion featured prominently in our interviews with applicants, and were referenced by a few participants during the workshop process.
The fieldwork phase, including participant recruitment, onboarding and the start of the workshops, coincided with important geopolitical developments. The re-election of President Donald Trump on 6 November 2024, and the growing presence and intervention of US technology entrepreneur Elon Musk in UK public discourse, had a significant impact on discussions at various points in the process.[95] Similarly, conversations responded to some of the events in the continuing war in Gaza and its deepening humanitarian crisis.
The workshops also operated under particular political weathers. Belfast’s first workshop, for instance, took place the day after polls opened for the Irish election. As the votes were counted, a process that would take three days, participants checked their phones for the latest news and analysis, waiting to see if a female Sinn Fein leader would win out on both sides of the border.
It is worth noting, too, that participants’ sense-making occurred during a particular moment in the AI ecosystem. Many referenced live news stories about OpenAI’s o1 model and its potential for strategic deception,[96] as well as other references to new tools and technologies, which positioned how they made sense of AI, and related to the context of ‘emergence’. The climates of ambiguity and uncertainty reflect feelings before January 2025, when the announcement of the large language model DeepSeek, developed in China, sent shocks across the global markets. In the findings workshop (March 2025) some people referenced continually shifting technological and politico-technical realities, such as President Trump’s AI showreel of Gaza.
The following case studies will enable readers to spend time getting to know the three places that locate participants and the ‘temporary communities’ they formed during the research period. Understanding how these communities view their own places provides an important background to their motivations and the dynamics they created together during this project. This context allows us to better understand the views and outlooks they held in relation to public good and AI technologies. As well as describing the overarching demographic and cultural constituencies of these workshop groups, we also offer short community portraits of people and places, created with the community researchers, which present these temporary communities in the ways we encountered them.
Portraits of people and places
The evidence we generated about ‘public good’ is grounded in the life stories and perspectives of the people that took part in our workshops, as well as the dynamics they created when they interacted and worked together. We have worked with our community researchers to translate these, so the evidence can be understood in reference to these perspectives and relationships. These portraits of people and places are written using community researcher voices and input; they help to contextualise and humanise the views that are presented in the chapters ahead, while acknowledging that they are specific to what took place during this project.
‘Growing a home’ – Belfast
‘It’s this – the ability to grow a home – that’s good – and that’s what I’m doing here, I’m growing a home.’
– Belfast participant
This sentiment – the common need to ‘grow a home’ – summarises the dynamics created in our workshops. The quote above comes from one of our participants, who was sharing what ‘good’ means to her, reflecting on her move to Belfast from Sudan, and beginning again in a new place. She restarted a career in the NHS, and she feels fortunate, although she doesn’t know yet whether her mother can join her in Belfast. Her story is personal to her, but the narrative of resettlement after conflict had many resonances for others in our workshop.
Nearly half of the people in our workshop had personal experience relevant to her story. A few participants resettled in Belfast some years ago – 23 years in one instance. Some were more recent arrivals. Two young men, from Sierra Leone and South Africa, were still waiting for their settlement applications to go through. They are highly qualified and eager to learn English. One was encouraged by his Mum to take part: he’d get to know more about his new country and build his confidence. The other took part because it was an interesting topic, and he wanted to develop his English language skills. This opportunity felt good for them: it takes a long time for the settlement application process to go through, and it is hard for them to find enriching experiences when they can’t work or take on education.
The experiences in this group speak to the recent history of immigration in Northern Ireland, which has seen a huge rise in asylum and settlement applications since 2022.[97] With increased population has come pressure on services, concerns about division, and a growing voluntary apparatus to manage integration. The reconciliation initiative in Ballycastle, Corrymeela,[98] founded to build bridges between divided communities during the Troubles, has been repurposed to help people integrate into this predominantly white society. The voluntary sector has expanded to manage these changes, with a range of small enterprises and networks.[99] This is how many of these participants heard about this project in the first place. One of the participants, who was once an immigrant herself, works for one of these initiatives, helping others to manage integration.
The ‘local-born’ participants also know what it’s like to resettle after conflict. Many of them think of themselves as ‘Post-Troubles’ (they rejected sectarian identifications during the application process), but the legacies of the past remain tangible: gaps in education and pan-generational poverty from years of systemic discrimination and slow economic growth.[100] People don’t talk about it too much, but our community researchers talked about the generational traumas underneath the surfaces of conversation, which rise up here and there: references to growing up in what people describe as ‘brutal’ areas and familial experience of police abuse; the humour used to assess and dissipate threats, a habit ingrained from living with violence.
For both ‘newcomers’ and ‘locals’, we noticed that being together in this room affirmed something important about themselves and how they want to live: they are all ‘growing’ a vision of Northern Ireland that welcomes and feels welcoming; that knows, based on its past, both the consequences of division and how to repair it. The commitment to service to community others was palpable in the room, as people demonstrated care and respect for each other in their interactions. When they shared their life stories, we reflected on how so many help others in one way or another, through trade union work or in the voluntary sector, helping children or homeless people. Two participants advocate for neurodivergent communities, having experienced ADHD and exclusion themselves. This project is a really important initiative for them: AI has been a leveller and they believe it can be so for others.
They all came with hope for Northern Ireland, but they recognise it needs work. The distinctions between ‘newcomers’ and ‘locals’ hung over them, sometimes interrupting the flow of group conversations as communication broke down or the banter of ‘locals’ dominated. Our community researchers referred to the ‘crouch position of distrust’, which people in Northern Ireland have learned from years living with threat, and which surfaced in conversation. Occasionally, misunderstandings arose due to the complexities and intersections of identities in the room. A young Muslim woman (a second-generation immigrant) at one point had to correct assumptions: she’s British; she moved to Belfast from England last year. Others felt misunderstood, too: a white man in his forties talked about his feminism and raising four daughters.
Together, these diverse people worked up a new vision of, and future for, Northern Ireland that is progressive and positive, welcoming and forward thinking; their conversations were their commitment to grow a home they all can live in.
A city of villages – Southampton
Claudia, a community researcher, joked with us about community life in Southampton: ‘It’s a bit like TK Maxx: the jumble can feel overwhelming, but when you dig through it, you find all sorts of treasures and revelations in there that are truly valuable.’
She was describing how community life can feel hidden and dispersed across the city – what her colleague Rae Turpin acknowledged as a ‘city of villages’. The city has evolved over time, with pockets of communal action and identity grown from the historical layers of modern and contemporary residential and urban development, intermingled with a sprawl of commercial hubs, ports, green spaces, waterways and nature reserves. Low-income communities inhabit the fringes of ports and industrial sites, suffering the worst effects from pollution. Multigenerational residents live side-by-side with newcomers or transient residents – people who come to work in the city’s big industries, or to study at the university.
The people who joined this project in Southampton came from across these different social, cultural and professional milieus, each bringing unique perspectives on the city and what it means to live here. Some were more ‘typical’ of the broader community – they were students, long-term residents, people working in key local industries. Many were also from groups whose voices and experiences are less heard in mainstream narratives about Southampton, and indeed in public discourse more broadly: creatives and musicians – people who sometimes live day-to-day and often balance precarious work with other commitments; community organisers, and people who have decided to live outside of society’s capitalist norms.
As a group, serving others is something they held in common, whether through health work or caring for people battling addictions, working to eradicate modern slavery, or carrying with them their previous service in the armed forces. Some of them have campaigned to make lives better, holding the city council to account over its pollution targets; others make steady and everyday improvements to communal spaces, through volunteering in community gardens or nature initiatives.
From their conversations, it was clear they had the capacity to hold different ideas about Southampton and what living here means. Life here can feel fragmented, with pockets of grassroots ‘DIY culture’ across the city, but they all contested the commercialised heritage and touristic discourse that explains Southampton only in reference to its port industry and its docks. The city, they argued, is progressive, energetic, creative and intellectual – it’s not just ‘the place where the Titanic left’ or a gateway to elsewhere.
These differences in narrative between official discourse and community outlooks reflect longstanding tensions: between affluence and innovation, which sit in places like the ports, universities and the hospital, and the significant challenges faced by many communities across Southampton, which many participants had experienced first-hand. They reflect the group’s experience of disconnection between the needs of communities, as they know them, and the commercial imperatives of the private/public partnerships at the docks that increasingly shape the city’s future.
A commitment to social cohesion, as well as shared feelings of fragility, gave this group a common reference point for community. Early in the exercises, the idea of ‘street parties’ emerged in discussions, returning repeatedly as an expression or an intersecting point where these people’s values of community, connection and creativity came together, across different backgrounds and perspectives.
The focus on street parties epitomises some of the dynamics that made this group distinct: it captures their commitment to place-based belonging, shared joy and communal effort in a context where social life is highly fragmented and lacks capital investment. It represents a reclaiming of public space and recentring of human connection in response to isolation, which some participants expressed concerns about. It also reflects a deep longing for informality, joy and spontaneity in civic life. Street parties were an ideal, but also a bitter paradox – speaking to the value of grassroots life, as well as its fragility and transience; expressing a longing for reciprocity and resource sharing, where it often did not exist.
A community portrait – Brixton
‘I could see people of all colours and backgrounds. I could see people from the local Black community, the LGBT community – some foreigners, people struggling to speak English. There were the plain ordinary moms like myself. I just thought that was a really good representation [of Brixton].’
– Brixton participant
Brixton brings multiple communities together, and the people who took part in this project came from a range of its diverse populations. They had different backgrounds – Caribbean, African, South Asian, European, Portuguese – demographics that feel typical of the area’s diversity. Some were first- and second-generation immigrants and remained deeply connected to their cultural and social roots, especially Brixton’s Windrush generations. The LGBTQIA+ community, which has a proud place in the area’s history and its culture,[101] was also represented.
Communal life here thrives on connections. People from different backgrounds come together – it’s a dynamic blend of cultures, music, food and art, festivals, carnivals. There’s a deep sense of pride in local heritage across Lambeth and south-east London, especially in areas like Brixton, Peckham and Streatham, known for their Afro-Caribbean, African and South Asian influences. Grassroots organisations and local initiatives play a big role in addressing social issues, fostering inclusivity and celebrating diversity. The spirit of collaboration is strong, whether it’s through youth programmes, cultural festivals, or activist or support networks like those for dementia awareness, over-fifties groups, or breastfeeding groups that support mothers facing the isolation of parenting. Two participants were advocates of a local initiative for the social inclusion of girls and women.
Many of our participants shared a commitment to the community through their involvement in local causes, housing rights and social justice, or through taking part in advocacy and community support networks. Many were vocal about the challenges they face, including systemic inequalities and poor access to mental health support. During our conversations about public good, we heard echoes of some of their past and more recent challenges: references to the Brixton riots in the 1980s and 2011, the Windrush scandal and the effects of hostile environment policies on local people, the overpolicing of Black people in the community, and the disproportionate impacts of COVID-19.
Many of them sensed that the area is changing. Gentrification, alongside the interventions of property developers and estate agents (‘criminals in suits’), is altering the social dynamics that many recognise as particular to Brixton. Long-term residents can no longer afford to live here, as rents and property prices rise and alter the local economies of the high street and the market. People enjoy the greater amenities and opportunities, but they’re worried about losing the unique character of their neighbourhoods, and whether this change benefits people like them. Food prices are rising and schools are closing because families can’t afford to stay in the area.
When the group came together, however, they reflected the solidarity and resilience that’s present in the local community, which you can see when people interact around the high street, in the markets, in mosques and churches, or when urgently responding to pressing social issues. This reflects a history of collective effort, mobilised in response to urgent social challenges. In the aftermath of the 2024 Southport riots, the community made another collective stand against racism in Windrush Square, demonstrating some of the resolve against discrimination and hate they showed in response to the murder of George Floyd in 2020.
People in these workshops were careful to accommodate one another and to embody values that have meaning in this multicultural society: kindness and compassion, mutual respect for differences in cultures, and creating space and acceptance for others’ opinions. They worked collaboratively, demonstrating another core belief that many in the group expressed and reflected on: that community and change take effort and collective work, which everyone has a role in achieving.
What is good?
‘Public good’ is not an everyday term in UK political or public discourse. In political philosophy and economics, it refers to things or services that are, by their nature, open to everyone (‘non-excludable’) and not diminished by others’ use (‘non-rivalrous’). There are disagreements about the definition of public good, about whether we should think in terms of ‘local’ or ‘global’ public good, and about whether a market economy, with its inherent competition, can provide public ‘goods’ (services, systems, food, utilities).[102]
Public good featured occasionally in UK policy before the advent of AI technologies, often to justify investment in technology-driven innovation.[103] Governments of different political make-ups have invoked ideas of public or ‘common’ good in formulating policy agendas, such as the ‘Big Society’ (2011).[104] There are live enquiries into the nature and state of public good – and its relationship to systems, products and services produced in the private sector and enjoyed by individuals – for example in education[105] [106] and health data.[107] [108] There have also been prominent civil society initiatives to probe what public good or a good society look like, for instance from the perspective of poverty advocacy.[109]
None of these initiatives have created a shared societal discourse around public good, however, and when we engaged people to think about public good in this research, we expected to hear diverse perspectives and interpretations. We asked them to define it in their own terms and did not seek to achieve a consensus. A few people were familiar with, and sometimes referred to, previous incarnations of public good in policy agendas, but this was rare. Most people took very different perspectives and approaches to the concept, bringing diverse assemblages of community assets, services, infrastructures, relationships, ethics and values, and feelings, which are discussed below.
What we learned from Belfast, Brixton and Southampton
These three place-based enquiries tell us something important, if not representative, about how and why diverse publics in different places across the UK may think about and experience AI differently.
Brixton’s case study explores the implications and diverse meanings that public good and AI present for people committed to multiculturalism and inclusion in the face of endemic social inequalities.
Southampton’s tells us how public good and AI appear to the dispersed and fragmented communities that have grown up around the fringes of the port industry and its public/private partnerships.
Belfast’s case study distils how public good and AI relate to diverse people (migrants, ‘locals’) who are all learning to (as one participant described) ‘grow a home’ in a post-conflict society that is still managing and living with social tensions.
Together, these case studies help us to interrogate how historic, social and structural forces position people within and towards the ‘AI revolution’. They invite us to consider how much ‘place’ – in terms of political culture, identity, grounded social networks and differential relationships to power – matters to how people in the UK will encounter, employ, make sense of and manage the introduction of AI into their lives: the benefits they see in deployment, the mechanisms they trust, what they want from decision-makers, and the hopes they place in technologies that align with societal benefits, as they see them.
‘Place’ is by no means the most important feature of these case studies or the most influential determinant of views. By looking across these case studies, we can see other factors and features that contribute to shaping an individual’s positioning towards AI and public good – such as age, cultural background, and lived experience – which we also elucidate and point to throughout the report.
The many varieties of public good
‘For some, the notion of “good” is tangible and deeply personal, rooted in their daily experiences and immediate environment. For others, it is an abstract but urgent structural issue, requiring intervention at the level of policy, economics and [state] power.’
– Rae Turpin, community researcher, Southampton
This observation from one of our Southampton community researchers identifies the diverse vantage points from which people engaged with the idea of public good. These reflected not only their values and their lived experiences, but also fundamental differences in how people understood society and change: the value they placed in individual autonomy and agency versus the power they accorded to societal structures or state power.
When people engaged with us in exploring the question of public good, they articulated a collection of ideas, which were composites rather than defined agendas; a good society had many forms and foundations, which they thought of as sensuous as much as systematic. They thought of feelings of goodness, arising from close relationships and human connection, which were built on core services or necessary community infrastructures.
‘The workshops and discussions have broadened my understanding of public good by highlighting how diverse and subjective the concept is. For example, some participants valued practical needs like health care and education, while others emphasised freedoms and opportunities for self-actualisation.’
– Southampton participant
Public good in the everyday
‘Public good often begins with small, quiet, everyday practices of solidarity – holding a door open, sharing food, checking in on a neighbour. These moments of care ripple outward, shaping the kind of world we live in.’
– Rae, Southampton
When people talked about public good, now or with reference to their hopes for the future, they imagined this concept in the everyday: walking through a local market, for instance, and exchanging knowledge and insights with others; a world where everyone feels just as special and valued in their everyday lives as a celebrity might feel on a red carpet; a ‘trip to the coast over the common’ using sustainable transport and ‘stopping to feed the birds’.
Global phenomena were seen through everyday realities: the taste of clean air and peaceful quiet, in a traffic-free society. Participants envisioned walking around the ‘new community full of green spaces’ and seeing no cars on the road and solar panels on all the buildings. They imagined headlines in the news that communicated that a good society had been achieved in all sorts of ways (‘every child has access to what they need to thrive’, ‘world peace announced’).
In this way, the hopes they expressed as part of their conception of public good reflected everyday challenges and concerns in their communities: the air and noise pollution from the ports or traffic, which impacted disproportionately on people who lived in deprived neighbourhoods; the lack of time for community and connection, which came with unrelenting schedules or precarious work; or the sense of moral harm that came with living in a world with egregious risks and inequalities.
Community life and relationships
‘It was easy for them to do, to express what good was. It gradually came out, because it’s coming from within. It was: “these are our morals, this is who we are as a community, and …this is what we think good should be, and what good is now.”’
– Natoya Whyte, community researcher, Brixton
‘Public good is a community that is together, that you can support and rely on.’
– Brixton participant
‘I’ve put my Granny – because she was all the good in the world for me.’
– Belfast participant
The people who took part in our workshops thought of public good as deeply rooted in their relationships and a sense of belonging and identity, which they circled out from their primary, familial relationships to encompass friends and community. Some extended their thinking to include people outside of their ‘bubbles’, or whose politics they didn’t share; others referenced other areas, regions and nations, thinking of the public globally to embrace everyone.
These building blocks gave people the basis for achieving a good society: a sense of care and connection, received from stability and grounding in a place, where you could develop networks, and rely on relationships and the support of others.
Everyone can thrive
‘They had the same appreciation of values in the community, values of togetherness and being at ease with each other and the environment; of “let’s have enough for everyone and make sure we live life in peace”.’
– Claudia Murg, community researcher, Southampton
Underpinning these relationships were basic needs and provisions that people could rely on, which many felt were lacking in their communities: high-quality housing; food and financial security; services, such as education and health. Coming through in these conversations were the challenges that many people faced in accessing healthcare and mental health support. It was too hard to get GP appointments, or to trust that your child was getting an education that nurtured their talents, capabilities and interests.
Everyone felt that people needed relief from precarity, and the opportunity to develop the feeling of confidence that comes from stability, so they could build and nurture the relationships that mattered to them, contribute to society through meaningful work, or engage in social action to support others.
‘Within Lambeth we have wards where poverty is deepening, and other wards that are getting better off. We see this widening inequality gap everywhere; you step outside into Brixton, and you will see increasing levels of homelessness, addiction.’
– Anita Kambo, community researcher, Brixton
‘Public good means abundance for all.’
– Brixton participant
‘Ensure there is enough, for everyone, forever – through fair shares for everyone, removal of waste (especially food waste) and a universal basic income.’
– Southampton participant
A life free from fear and uncertainty
‘Public good means safety from fear for everyone.’
– Brixton participant
Living a ‘life free from fear and uncertainty’ was foundational to public good for most people. Safety and security meant much more to them than the provision of institutions or services, such as the police or army. These were referenced in conversations only twice, with community researchers in Belfast and Brixton pointing to the historical factors that still make those relationships difficult today. Instead, when people thought of security, they envisaged a sense of ‘social protection’ grounded in place-based relationships, and underpinned by financial wellbeing and reliable (for some, also meaningful) employment.
Conversations about security reflected place-based differences in lived experience. There were different tonalities for people with personal knowledge of conflict and violence – the participants in Belfast, for instance, seeking asylum; or those with emotional legacies of conflict in their families; or participants in Brixton who referred to riots or knife crime. For them, public good meant living in a society without war or violence. But for many others, too, public good meant security at the local and global level: living in a world where they could be assured that others far away from them would not suffer either.
‘There were already existing concerns about policing, including how “stop and search” operated in Lambeth. These fears were exacerbated during the COVID-19 pandemic, with concerns about fairness of policing around lockdown restrictions. The murder of Sarah Everard (in Brixton/Clapham) resulted in further distrust of policing institutions, which became further exacerbated by police handling of the Clapham vigil. There was much public anger in Lambeth (and nationwide) to this.’
– Anita, Brixton
A clean environment, with access to nature
Local and global public goods also intersected in conversations about nature and the environment, which was a theme across all workshops and a point of consensus for all the groups. Most people’s visions of good featured natural backdrops – trees, waterways and green spaces. They presented worlds where environmental stewardship was baked into social life, where people cared for nature and animals, and where renewable energies or eco-friendly transport featured as part of their everyday realities.
Behind these desires lay the value they placed on their local natural amenities and green spaces, but also their lived experiences of some of the worst impacts of industrial societies: the need for ‘freedom from pollution’, whether industrial pollution from the docks, or from the traffic along the busy South Circular road in London.
Public good is values-based
‘Public good was made up of strong values that seemed to be widely held and supportive of a good society – treating others well, being respectful and kind.’
– Natoya, Brixton
‘Respect is everything: you can respectfully disagree and have different opinions.’
– Brixton participant
For most people, ‘good’ was an intrinsically values-oriented and morally driven concept, grounded in ideas of fairness and equity. The concept evoked a society built on shared values and a strong desire to treat others well and respectfully. They foregrounded values connected with equity, including diversity and inclusivity, as values that mattered for multicultural and culturally diverse communities. These included relational values that helped to manage conflict, difference or division – forgiveness, empathy, ‘recognising others’ or ‘listening and learning from others’ – as well as foundational values expressing the reciprocity of social relationships: care, compassion and kindness.
‘Having a diverse community is the biggest thing.’
– Brixton participant
These conversations were shared across all sites, but they were particularly pronounced in groups with common experiences of racism and discrimination. Belfast’s group, for instance, built solidarity around those with current or past experiences of immigration, supporting people to talk openly about their experiences of exclusion, past and present. Their concept of public good foregrounded the acceptance of others and the value of cultural difference, often expressed in simple ways, such as the images from Wicked (released at the same time as the workshops) featured in collages. They also acknowledged racism as a ‘public bad’, and some shared personal instances of abuse based on their cultural choices of clothing or hair, which had eroded their feelings of acceptance and self-worth.[110]
Many people believed that diversity brought strength to communities, helping everyone respond better to collective challenges. They referenced various forms of diversity that contributed overall to public good. These included differences that come from culture, age or background, but also from neurodiversity or learning disabilities. At least two participants in each of our sites had direct experience of social, workplace, or educational exclusion related to learning disabilities such as dyslexia, or neurodivergence (including ADHD and autism). This invited considerable reflection on the value that different outlooks and creative minds can bring to public good.
Individual choice and autonomy
‘For me, I prefer a value-based take on the public good, since it empowers the individual – me – to do what I can within my sphere of control.’
– Southampton participant
Commitment to community, and shared values, did not obscure the role of the individual as a core component and driver of public good. In fact, as the quotation above illustrates, some people emphasised values because they accord individuals more power and responsibility to contribute to the public good, as well as to benefit from it.
‘To be at their happiest, people require social and creative freedom.’
– Southampton participant
Southampton, of all the sites, placed the greatest emphasis on values that centralise the individual as the locus for social action, and on individual self-actualisation as a central component of public good. This reflected some of the group’s liberal outlooks, as well as its high numbers of people who explicitly valued autonomy in their own lives, such as freelancers or creatives. This brought discussions of politics into the room: advocating for Universal Basic Income, for instance, to allow everyone to realise their potential; and calls for deliberative democracy in local governance to recognise the power of individuals’ agency and rationality. Their insistence that joy and fun belonged in any vision of ‘good’, represented in repeated references to street parties, also conveyed ideas about mutual reciprocity and cooperation, where individuals come together for a common purpose, with considerable space for personal choice and freedom.
Difference and disagreement
Within discussions, some values came into tension or conflict with each other, causing friction in some conversations and reflecting ideological differences over the nature of freedoms and interpretations of liberty within a public good framework.[111] Tensions between concepts of ‘freedom of speech’ and the ‘freedom to offend’, for instance, became apparent during a group conversation in Southampton, and were not resolved. In Belfast, tensions arose around values that were seen as economic, such as efficiency, with some questioning whether these could be compatible with other values in the room, especially people’s emphasis on equity and relationality.
Community solidarity and social action
‘I feel that the community is prepared to show resilience in pushing back against power imbalances and people are willing to support one another. It’s the group’s unique, deep-rooted cultural diversity and the strength that comes from it that impresses me the most. In discussions, the sense of pride in the area’s multicultural history came through strongly too.’
– Natoya, Brixton
‘The cohort expressed a feeling of disempowerment. They were disillusioned with public services, frustrated, and sometimes angry. Yet they came into all of the workshops with a strong voice, regardless of background or circumstance.’
– Rae, Southampton
People saw community solidarity and social action as ways in which ‘good’ could be achieved for everyone. Behind these perspectives were layered lived experiences: of day-to-day respect for people from different cultures, of serving and supporting others, and of achieving social changes both small and large. Participants shared examples of their community involvement, from working in community gardens, to advocating for better management of pollution targets at local level, to helping friends or colleagues advocate for themselves with landlords, employers or schools.
‘When people were writing the letters to politicians or to local representatives, they didn’t focus on the UK government and they didn’t focus on Europe. They focused local, and that was an interesting thing.’
– Paula Quigley, community researcher, Belfast
When people imagined how public good became a reality, they held considerable space for themselves, as individuals and communities, to participate in that effort and collaborate in achieving a good society that everyone could benefit from and live in. They didn’t look to decision-makers or systems to provide the answers. Instead, they imagined public good as a shared enterprise, in which they all participated in big and small ways: taking ownership of a ‘big challenge’ (like climate change) in their homes, streets or neighbourhoods, or working together in community solidarity to build action for social change.
Distrust
‘People mentioned things they would like to happen caveated with a lack of faith in such things materialising. At a local level, actions included the local council playing an active role in the community, allocating resources based on community need. These conversations were often grounded in concrete needs which emerged at the infrastructural/resources level, such as affordable housing and access to local schools rather than schools across town.’
– Rae, Southampton
While these perspectives testified to many community strengths, and especially the existence of strong voluntary sectors in these areas, participants’ faith in community also reflected a latent distrust of many power holders. The community researchers explained that this is often related to specific and sometimes longstanding negative experiences of external decision-making and poor treatment in services and governance. In fact, reliance on groups and voluntary bodies was often a corollary of social isolation and systemic exclusion, and testified to the work that had yet to be done in communities that were still struggling with social inequalities.
‘Communities can feel that decision-making often happens at higher levels, with little regard for the voices of those directly affected. If the community feels overlooked or unheard it sparks conversations about the need for more control over local resources and public good.’
– Natoya, Brixton
This had different dimensions for each of our communities. In Southampton, for instance, the local university, while a prominent employer, was also responsible as a landlord for what one participant called the ‘demolishing’ of public spaces. People expressed doubts that the local council had public needs in mind when negotiating with the docks, or that MPs in Parliament could make good decisions about others (‘If you’re more power hungry, do you lose your values?’). Their emphasis on what was good in their communal lives reflected real concerns about ‘splintered communities’ and the wider atomisation of a consumer-driven society.
People voiced some similar sentiments in Belfast and Brixton, but community researchers explained that there was additional and historic experience of systemic discrimination and institutional abuse, which they thought shaped people’s views. In Brixton, people did not expect to receive fair treatment in public services such as education; they had been historically underserved by health services, especially in mental health provision. In Belfast, community researchers explained that the distrust of others (especially outsiders) lay beneath concerns about whether everyone would engage fully and responsibly in making ‘good’ work for everyone.
Key takeaways: Moving towards a definition
When the participants in Belfast, Brixton and Southampton thought about public good, they brought forward ideas that align with various participatory research projects in the UK that have explored public good or adjacent concepts:
- They felt that the concept translated as ‘wholly good’, and involved positive and morally situated outcomes, rather than pragmatic trade-offs on the basis of majority benefits or interests.[112]
- They agreed that public good meant tangible positives for everyone, centralising ideas of fairness and equity.[113]
- In relation to community and connection, they were clear that these are foundational to ‘good’, and they recognised the importance of reciprocity and helping others as part of public good.[114]
- Underpinning all this, they felt that people should have a decent standard of living and material comfort to enable them to access and contribute to a good society.[115]
When policymakers use terms like public good, therefore, they must recognise that there are core meanings, priorities and ideas that people from diverse backgrounds and with different experiences will expect to see reflected in regulation, policies and the provision of ‘public goods’ like health, education, or welfare.
‘Public good’ is a meaningful concept for people when engaging on societal issues. The evidence here suggests that the sympathies, values and feelings associated with public good are robust even in the context of factors that have the capacity to atomise and consumerise relationships, like AI technologies.
Making sense of AI
During the project, we found that a collection of ideas about AI technologies – the rapidity of growth and technological change, the extent of their impact on society and transformative potential, and uncertainty about future developments – featured strongly in how our participants encountered and made sense of AI. Most of our participants thought about AI as a group of technologies that were becoming more prominent in their social worlds, but which had ‘not quite’ become integral to them or part of their everyday. They expected this to change soon.
These views speak to conditions created by the emergence of technology – a concept that describes the processes by which technologies ‘come into being’ or become more visible, important or prominent in society.[116] Used in science and technology studies, ‘emergence’ offers a way to envision and operationalise the processes of technological innovation.[117]
There are varying definitions of what constitutes an emerging technology, although scholars refer to five core conditions: radical novelty, fast growth, coherence (e.g. an industry, sector or community of practice), prominent impact and – lastly – a state of ambiguity and profound uncertainty, where there is expectation that a number of possible, and even contradictory, uses or outcomes for the technology could be achieved.[118]
Emergence is not just about material conditions of technological transformation; the process is made real through a related set of ideas or ‘imaginaries’. Anthropologist Sarah Pink has argued that this perpetual state of unfinished-ness and expectation is fundamental to the shared meanings and production of emerging technologies. In her work with stakeholders across policy, industry and user communities, she sees that ‘emerging technology occupies an anticipatory space’, which is predicated on indeterminism and limitless possibility.[119] We found that, for the diverse publics who engaged with this research, these are also everyday outlooks and ways of thinking about AI.
‘I’m feeling optimistic because I can see the benefits of it. Optimistic but also a little bit nervous because it’s the unknown? So I guess that does give you a bit of unease, like a bit of uncertainty because you just don’t know what the next AI tool will be.’
– Participant
We could think about participants as occupying different stages of awareness or adoption when it comes to AI. Some people had started to see more stories about AI across the news or social media, or had heard others talk about it. Others had adopted AI tools for leisure or work, finding them helpful and, in some cases, ‘using it a lot’ for specific tasks. Three participants had examined AI-related topics as part of undergraduate or postgraduate study (creative arts, aeronautics and diagnostics), and some came with very little knowledge, apart from having heard the name.
‘If I’m being honest, I didn’t know what AI was before the sessions.’
– Participant
‘Not quite living with’ AI
But the challenges that people were encountering in their lives went beyond issues of whether a tool worked for them or not, and reflected how AI was beginning to interface with their everyday lives and relationships. They saw the radical novelty and fast growth associated with AI’s emergence, and found that AI had begun to present them with unforeseen dilemmas, which required them to find new social norms or information in response. They engaged with the project in part because they hoped to find the answers to some of these questions from other people in their community or from a trusted authority.
For many people, AI had appeared on the fringes of their everyday.
A student had started to see their peers using AI in schoolwork but felt ‘very unsure’ whether they should be using it at all. Parents talked about teenage children adopting AI-enabled technologies in school or for undergraduate work, as well as for entertainment; older siblings thought about younger children and what challenges AI might present to them while growing up. They wanted to ‘keep up to understand the changes’ for them and for the younger people in their families and communities.
Some people referred to live debates in workplaces or the strangeness of phenomena created by AI: their colleagues in customer services being mistaken for chatbots by customers, for instance. Those who worked in the voluntary sector and with vulnerable people wondered how AI might affect the people they cared for, as much as how it could work for them.
Many people associated these changes with a growing AI and technologies market, which they saw as primarily profit-driven. In this, they saw a degree of coherence – recognising a sector, an industry – in AI deployment: software developers and technology companies, as well as ‘Big Tech’. Many referenced tech entrepreneurs (especially Elon Musk) as the main power brokers.
But they also sensed a wider, systemic use of AI either existed – or was about to exist – in government, the public sector and society more generally. They wanted to understand this better so they could navigate and comprehend those changes. For them, the emergence of AI was as much about information seeking and awareness as actual technological change.
‘I would say that in terms of how it impacts my day to day, it’s almost coming quite silently because we don’t realise we’re using AI. Like a lot of what I’ve learned has kind of shocked me because it’s really embedded in my day to day.’
– Participant
AI in their everydays
For most people, consumer tools formed the core of their awareness and familiarity with AI: 64% of posts on our Community Wall, which invited people to post where they had seen AI ‘in the wild’, concerned consumer technologies connected to media and leisure consumption (Spotify, Netflix), smart home devices (Alexa) or specific applications they (as individuals) had purchased for creative projects or for use in their work.
When people talked to us about using AI for work, they usually talked about conditions where they had high individual autonomy and choice: they were self-employed or worked in small voluntary enterprises and social initiatives where they experimented with AI tools to help deal with underfunding or restricted resourcing. Some worked in creative industries (music, graphic design) or in arts organisations, and used AI as a time saver and efficiency maximiser for administrative work. This entrepreneurialism created a set of outlooks that were already optimistic about AI under certain conditions.
We also found that experience of neurodiversity and dyslexia created a set of enthusiasms about the potential of future AI deployment. At least two people in each of our sites participated in the project partly because of their lived experience with ADHD or learning difficulties. After years of struggle and exclusion at school and in the workplace, AI tools had enabled them to compete with others on a level playing field. Emails that once would have taken a whole day now took a few minutes; reports could be proofread quickly and without reliance on other people; automation helped create routines that stuck.
People who had experienced the exclusion in education that can accompany ADHD and dyslexia felt passionately about the benefits of AI tools generally. One man in his 40s, who had come to higher education late in life because he had grown up with undiagnosed ADHD, had embarked on a PhD to identify how and where AI could streamline and widen access to ADHD diagnosis. He felt that AI allowed the creative minds of neurodivergent people to shine through, and believed in its wider benefits for the ‘public good’.
‘AI could be one of the most productive barrier-breaking technologies that we’ve ever had. It can break down barriers for people who are neurodivergent. If you’re dyslexic, it can give people like that a crack at the whip like anybody else.’
– Participant
Expectation of change and impact
‘I do genuinely think there’s a lot of potential. I think in a lot of ways it is used in our lives already, but probably in a slightly underhand way that we don’t always know about. And I think I’m just still to be convinced about how that will roll out in the future. I have absolutely no doubt about the impact that it’s going to have, though. It’s going to be everywhere and we have to get on board with it.’
– Participant
When people tried to make sense of AI, they thought about specific technologies in different socio-technical contexts. But they also grasped another set of ideas about an ‘AI system’, seeing these technologies as part of a wider infrastructure that could cause unforeseen consequences and ripple effects, even where the technology was not directly applied.
Most people, therefore, saw AI as a transformational intervention across society, rather than just discrete tools that could (or could not) be applied in particular areas, carrying benefits and risks for individuals. AI use – they felt – would inevitably amount to systems-level changes across all aspects of society. For many people, this translated as a de facto increase in corporate influence and a heightening of profit-making imperatives in their lives.
‘AI is a form of control of the human race.’
– Participant
This made AI’s impact, for many, paradigmatic: a force that would transform or disrupt everything at some point in the anticipated future. People could hold different views about the nature of this transformation, as the quotes above suggest – whether it was barrier-breaking (at one end of the spectrum) or commercialising and oppressive (at the other) – but they agreed that the changes would be extensive and impossible to avoid.
So while it was possible for people to develop views about specific technologies in particular contexts, it was also possible for them to hold a parallel set of views, which related to the wider systemic aspects of AI.
‘It has struck me that the ramifications of AI amount to yet another form of colonialism. Just like the capitalist/corporate system is probably impossible to extricate yourself from, the coming domination of everything we see, hear and do – by algorithms – will make it very difficult not to be a part of it.’
– Participant
AI literacies
‘I want to know what AI actually is, instead of the conspiracy theories and online information overload, or the diversity of personal opinion I get from my friends at my local!’
– Participant
‘I would like to find out as much as possible in the available time. I already have some ideas what AI is and can do, but these are just my own theories. It would be great to check if they match or not with the reality of AI capabilities and limitations.’
– Participant
‘I want to know what happens to our ideas of the worth of personal intelligence, and how it can be used to really benefit communities.’
– Participant
‘I’d like to know how to use it for business support.’
– Participant
This process of sense-making raised questions for us about the role of AI literacies in public acceptance, and what different kinds of knowledge are involved in getting to a point of literacy.[120] The quotations above are in response to a question we posed during the onboarding process, which asked participants to tell us what they hoped to learn about AI during this project. Some wanted specifics of societal use and community applications; others wanted to wrestle with more ethical or philosophical questions about the impact of machine automation on human values and our sense of ourselves. A few wanted to develop specific technical or tool knowledge, which they could use in work contexts or for hobbies. This demonstrates that AI literacy is not singular, but comprises multiple literacies, including civic literacy, data and digital literacy, information and media literacy, and emotional literacy.
Rather than thinking of literacy as a narrowly skills-based set of competencies, we aimed to foster ‘critical AI literacy’.[121] This framing is intended to support people to not only understand AI’s technical functionalities, but also engage critically with its sociotechnical effects across their lives. This enabled them to confidently bring in their own experiences as expertise and to explore ideas about AI use for themselves as individuals, for their families, and in their workplaces, communities and society more generally.
‘Digital, media, data, information and civic literacy requirements converge as AI seeps into all digital environments, underpins and creates the media we consume, fuels and is fuelled by datafication, distorts the information environment while simultaneously expanding it, and changes relationships at community and geopolitical levels.’
– Tania Duarte, We and AI
To encourage participants to learn from each other’s social and technical expertise, and develop their views in relation to others’ perspectives and lived experience, we adopted a dialogue and enquiry-based model of learning, suited to contexts where there are multiple positions and perspectives and where there is no clear answer.[122]
Through these means, we aimed to counter the ‘deficit model’[123] approach to public engagement on AI, where practice proceeds on the assumption that publics lack the knowledge they need to make informed (and therefore positive) judgements of technology.[124] The deficit model can also rest on similar assumptions of a trust deficit, where consensus-driven engagement essentially erases conflicts or tensions to meet broader instrumental aims.[125]
Making sense of AI through social rehearsals
While we often used arts-based methods to catalyse this process, and to support people to reclaim the idea of, or narrative about, AI from corporate to public interests, we had greater success with forms of social rehearsal, such as role play or scenario methods, which are well-established approaches to engaging people with particular trade-offs, particularly in public dialogues concerning automated decision-making.[126]
While we drew considerable inspiration from critical approaches to futuring and visioning work,[127] the everyday lens of this research project engaged people most successfully in thinking through AI use and its implications for different people. The learning phase workshops, for instance, used game-based learning and role play techniques to simulate decisions about AI use in familiar contexts (e.g. a GP surgery). Our community researchers across different sites devised various forms of role play in the final workshop, which moved these forms of social rehearsal from well-trodden ground (e.g. automated decision-making in policy contexts) towards more mundane dilemmas, which helped people think about AI’s impacts in different ways.
There were observable differences between the groups in what kinds of scenarios they were ready to engage with, which may relate to political cultures and civic capacities in the room. Our Belfast cohort, for instance, engaged with scenarios on specific policy areas (e.g. policing, transport) that were relevant topics for the community. Our community researchers reminded us this was in part a question of political relationships and how much closer people in Northern Ireland were to people ‘in power’.
But, in many ways, our enquiry created a wider opportunity for the positive effects of social rehearsal, where people learned from others about how AI had worked for them, exploring AI risks and opportunities through understanding how others engaged with these issues. We found that encountering people with different perspectives or experiences of AI use seemed to help participants modify or develop their views, as illustrated in this participant’s reflection on talking to people who had strong views about positive uses of AI from their experience with ADHD:
‘I think until I came here and had conversations with the people that we’ve spent time with over the last few weeks, I was very cynical and probably quite fearful of what AI is and meant. But I think after speaking to these guys, there’s a lot of potential for it.’
– Participant
During these social rehearsals, in line with sociologies of data practices, we noticed that people engaged their emotions to help them make sense of a topic and identify their moral boundaries about what it meant to them.[128] Because of its entanglements with social action and meaning-making, emotion is recognised as an important component of deliberation,[129] and participatory engagement with AI and digital systems.[130] For this project, feelings were recognised as an important communicative and realisation tool in our workshop spaces. Sometimes these were uncomfortable feelings, such as feelings of uncertainty, where people often found no clear resolution. Sometimes they expressed more positive feelings (excitement, curiosity), which engaged others to collectively grasp the AI imaginary and make it work for them.
But we observed some negative conversational dynamics that flowed from this, albeit infrequently, which may relate to societal discourse and how emotions are understood socially. In particular, we observed that some participants saw a strong association between emotions and capabilities, especially between fear and ignorance, in the workshops. This association, although not widespread, created difficult interactions, and prevented some people from being heard when they expressed worry, concern or anxiety.
Difficult conversations
There were some areas, too, where social norms or sensitivities made some topics harder to rehearse in our participant-led enquiry. A good example of this relates to faith and religion, which was an intrinsic part of how many people, especially some Muslim participants, envisioned public good now and in the future. We observed how these conversations did not develop further in our final workshop, when it came to thinking about the impacts of AI.
This was a limitation of our participant-led approach. Britain’s ‘faith covenant’ recognises the role of faith and faith leaders in communal and public life. There are many reasons to think through the role that such communities or leaders could play in AI literacy, advocacy for minoritised groups, or in exercising moral leadership for public good.
These discussions would have demanded greater structure, more information resources and engagement with enclave or faith communities, to take place safely and with sensitivity. Northern Ireland’s history of sectarian division, for instance, made institutionalised religion more of a ‘public bad’ for some people in the workshops, which left little room to reflect on positive roles for religious institutions.
Ambiguity and uncertainty
‘I think AI will enable us to do things that will never have ever, ever, ever been possible before, but equally, that we cannot possibly imagine now at this stage, because we’re not there yet.’
– Participant
Underlying all these views was a recognition of AI as an inherently ambiguous and uncertain object. For most people, the state of AI technologies was one of continual change and reinvention. While in some ways this ambiguity laid the context for enthusiastic visioning and a sense of possibility – captured in the description above from one of our participants with neurodivergence – it also created a considerable sense of uncertainty over what future outcomes might be, because the technologies presented a multitude of potentials or possibilities.
This uncertainty was heightened to some degree by the learning phase, where people encountered for the first time technologies that they hadn’t realised existed, or hadn’t realised were already widely used. For instance, few people had heard of AI predictive technologies, had engaged with the debates about AI and emotion recognition, or had grasped the range and variation of what generative AI technologies could do in different contexts. In many cases, learning about one set of technologies invited a sense of wonder, but alongside it another vein of uncertainty about what other technologies lurked elsewhere.
The uncertain climate also produced a unique sense-making and workshop environment, which impacted on how people conceived of AI and public good in several key ways. First, it exposed the extent of information and evidence that might be needed to accompany AI use or uphold transparency. Some people felt they didn’t have enough information to formulate futures for AI, because they lacked clear evidence of the bigger picture. Participants wanted projections for every possible future, not simply one set of likely outcomes. Environmental impacts of AI-driven systems, for instance, were roundly accepted as a hard negative and a red line, but people wanted clear and context-specific evidence about the consequences of all choices to use AI-enabled technologies across society.
Second, uncertainty and a sense of doubt became a component of how people developed and expressed their views about AI and its potential for public good. It was common for people to engage very optimistically in visioning possibilities, and to formulate ideas and agendas, but also to step back and heavily caveat their perspectives based on the likelihood of change. Participants’ views on public good and AI were, therefore, composite – a collection of hopes and fears, speculations (‘what ifs’), caveats (‘but ifs’), idealisms and pragmatisms, explorations and curiosities (‘maybes’), and retractions (‘maybe nots’).
Key takeaways: Emergence in the everyday
We found that the concept of ‘emergence’ situates publics’ views in important ways that help to make visible how perceptive and capable publics are in responding to the realities of a fast-changing and complex world, including new technologies. Publics have been perceived as lacking the critical and technical knowledge necessary to make informed judgements about AI technologies, or influenced by cultural narratives that are ‘unduly negative’.[131] Views that appear contradictory are conflated and read as expressing ambivalence, implying a lack of comprehension or capability to deal with AI’s complexity.[132]
But when research provides space and time for distinct views to emerge, we can appreciate how much those views reflect what people see and encounter of AI as public discourse, and of AI technologies as social objects.
Emergence also helps to explain the textures and layers of people’s relational dynamic with AI and how they positioned these technologies towards public good. As the project Living with Data has found, experiences of uncertainty in the wider world opened doors for doubt and distrust.[133] We explore how this manifested similarly, but perhaps even more extremely, in relation to AI in the next section – because AI was, from its emerging condition, an inherently ambiguous and uncertain object. Recognising this previous work in relation to everyday data, we describe the participants in this research as ‘not quite living with AI’ – a descriptor both of how AI is appearing within their social relationships and of their ways of thinking about it, which were characterised by an expectation of being always on the verge of (uncertain) change.
Public good and AI
AI for public good is not ‘one thing’
People imagined the relationship between ‘public good’ and AI in very different ways from policymakers: public good retained its clarity as a morally driven concept, not simply the object of innovation and economic growth. They continued to centralise the imperative to build a society where everyone should thrive. In doing so, they built on their ideas about public good and explored interfaces with AI. They found some clear resonance and connection between what they understood of AI’s functionalities and their components of public good, but there was also some dissonance. In many cases, a disconnection persisted: AI remained a satellite in their investigation, with people uncertain about the ‘push/pull’ force it exerted on their lives.
Public good means AI works for everyone
Making life better
If AI for public good is about making change for the better, most of our participants saw this as a very simple proposition. They wanted to see higher standards of living, lower energy and food bills, and everyone having access to higher quality housing that served their needs. If AI could help produce these outcomes for them – perhaps, as some Brixton and Southampton participants suggested, by maximising energy efficiency through power demand prediction, which could bring bills down – then it had a clear contribution to public good.
Many people also recognised that AI could play a part in making public services more efficient, which some thought could contribute to better quality of life through improvements in the provision of these services. These conversations touched on specific dimensions of AI use in transport (prioritising, scheduling or controlling congestion), education (access to materials, personalised support) and health services (triage, symptom checkers or personalised medicine).
While there was appetite for and interest in using AI well and where appropriate in public services, people did not feel that efficiency should trump other considerations or core priorities, or constitute a priority for its own sake.
Making life fairer for everyone
‘Their lens was very much about community good, and imbalances that exist within their communities, and how we can ensure that this doesn’t get worse, and how we can even go that step further to even out those power imbalances.’
– Anita, Brixton
If AI-driven innovation presented a gateway for rapid social change, most of our participants also agreed that this should mean fostering equity and inclusivity to make life fairer for everyone.
People who came from minoritised ethnic groups emphasised how incorporating diversity into AI technologies might make meaningful differences in their lives in various ways. Their ideas included an AI-enabled hair tool, which could recognise and accommodate the important cultural differences in hair, and an AI intervention across social media, which would flag where people had posted misinformation about different religions or cultures and help to correct these views.
These were not minority perspectives. People across our sites cared about others beyond themselves, their immediate family or people in their social circles. They also wanted everyone to be treated fairly and to share in the benefits from AI.[134] In this, their views about these technologies emanated from a set of wider, community-based solidarities, in which the groups expressed care for others who faced challenges they did not, for those not in the room, or for those whose politics they did not agree with.[135]
People expressed considerable appetite for where AI use could help to ‘de-bias’ systems, identify areas ‘where people are not represented’ or locate ‘who needs support’ across local society or in public services. They thought of how AI tools could overcome well-known cultural or language barriers in public services, as well as offering digital literacy support across different online systems or platforms. The strength of feeling against AI use that discriminated or undermined rights based on any demographic or cultural characteristics was clear and palpable: any real risk of discrimination against anyone was a clear red line.
‘They wanted AI to be implemented as ethically as possible and that’s coming from a Northern Irish society that understands deeply the value of all those principles, because that’s been part of covert healing that’s gone on in Northern Ireland: we know what good looks like, we’re still trying to get there.’
– Patrick Toland, community researcher, Belfast
Safeguarding autonomy, care and relationality
For most people, AI for public good was a vision for AI in which time saved (or efficiencies made) produced greater possibilities for people to have the autonomy and freedom to nurture the things that gave their lives meaning. This included greater opportunity for self-expression, self-improvement and enrichment, as well as time to prioritise the relationships that mattered to them: to spend more time with their families and friends, or in community enterprises and caring for others.
Most people also prioritised values of care and connection, to benefit reciprocally from person-to-person care and interaction across all areas of AI use.
This was especially the case for AI use in healthcare, where most people pushed against any AI deployment which they thought traded off human care for automation, or where there was risk of deskilling in these areas. But it also extended into wider areas and a range of concerns, such as the ways in which AI use could erode capacities for empathy and exacerbate division.
These conversations help to contextualise the broader findings in the Ada-Turing AI Attitudes (2025) survey, which demonstrates that diverse publics value efficiencies made by AI across a range of applications.[136] Our enquiry suggests that these positions are heavily caveated when set against other core, values-based priorities; they may also carry assumptions about the potential benefits that efficiency offers.
Spotlight: AI in the care home
Care is an area of focus and development for public interest AI, with various AI solutions proposed to solve the urgent social problems that emanate from ageing populations in Western societies. These have included using AI chatbots or robotic assistants for substitute relational support in care home contexts.[137] Our Southampton participants explored caring in a socio-technical context, thinking deeply about the web of dependencies and relationships that constitute caring.
Context
We asked our Southampton participants to consider various contexts where AI might be used in workplaces. They were asked to act as decision-makers, deciding where AI should be used and for what. One of their examples centred on using AI in a care home, and the group was tasked with coming up with recommendations and sharing their rationales with the wider group.
Views
The group considered that care was made up of the contributions of many people, including care home staff, patients and their families, but also a wider range of people – cooks, cleaners, podiatrists, funeral directors – who worked together, providing an interweaving web of care centred on care home residents and their families. There was therefore no single person or broad-based ‘thing’ from whom AI could effectively take over the relational dimensions of care; any AI use would have ramifications for a wider set of relationships.
The group saw considerable potential to use AI, but only for backend and administrative tasks. This included helping to collect detailed data on medicines; allowing for better scheduling and ordering of supplies; and allowing better insight for things like menu choices, needs and preferences, to offer a more varied and potentially personalised programme for nutrition.
AI’s functions could make things more efficient, they recognised, but it was only worthwhile if it saved time for staff, who could then spend better quality time with patients.
‘Staff time is the priority that AI needs to save time for.’
– Southampton participant
Conclusion
Our enquiry points to a similar strength of feeling against AI as a substitute for relational care as the Ada-Turing AI Attitudes (2025) survey and other studies.[138] This evidence also shows that publics’ views are grounded in acute awareness of the socio-technical complexities of what care and caring mean in modern societies. These everyday explorations allowed people to think through and rehearse some of the dilemmas and challenges that face people in different domains across society, and highlight the multiplicity of decisions involved when considering AI use in context.
AI for public good means solving big problems
People shared some core enthusiasms for where the relationship between AI and public good inherently made sense to them, particularly where they thought that AI could help overcome challenges seen as critical for humanity. These conversations were energising, imbued with a sense of power and hope in the possibility of change for the better. They demonstrate some thematic commonalities with community priorities found in other research on ‘AI for Social Good’.[139]
This is particularly the case for the use of AI in climate science and to foster better environmental stewardship. Here, participants were energised and creative, thinking of ways that AI could be used to improve renewable energy technologies; predict human environmental impacts on ecosystems and weather patterns; or monitor species, plantlife and wildlife at a distance, to protect and foster biodiversity.
Similarly, they felt that AI could help create more sustainable nations, societies and communities, which were less wasteful and more efficient in both the use and sharing of resources. Ideas ranged from sharing food more evenly and carefully in populations, thus decreasing food waste, to helping communities to build and maintain community gardens and orchards, which were open to all.
Similar levels of enthusiasm and confidence in applying AI for public good related to health science and diagnostics of major diseases. The focus on AI and cancer diagnosis, which is very prominent in public discourse, emerged strongly in some conversations. Some who had lost family members to cancer, or whose friends had died young from the disease, felt that using AI to ensure that other people did not have to carry this burden of loss was the clearest example of AI for good that they could think of. Such perspectives offer the personal and lived experience contexts that shape widespread appreciation of AI’s value for cancer diagnostics, seen recently in the Ada-Turing AI Attitudes (2025) survey.
Many people also expressed considerable interest in thinking about where AI use could empower people and rebalance power towards communities. This included using AI for community organisation, bringing together people facing similar challenges or fostering similar outlooks to improve coordination. It also included how AI might give underfunded communities a ‘level playing field’ by providing the skills and knowledge resources they lacked in comparison to ‘Big Business’.
Within this set of views, a few people saw AI as a means to challenge the status quo, to equip grassroots communities and activists with the emancipatory tools they needed to effect and mobilise systems-level change: to gather and share information for ‘UBI (universal basic income) trials’, for instance, and make this political ambition a reality through mobilising support at the grassroots.
‘AI for public good’ is elusive and tenuous
Even when there was considerable enthusiasm and excitement about the use and benefit of AI in particular areas of society, such views were often tempered by a profound lack of confidence that these results could ever be realised. While some of this came from the challenges of making sense of AI as a topic and its relationship to public good, as set out in the previous section, there were also fundamental aspects of participants’ lived experience and histories, as individuals and communities, which made the idea of AI for public good appear both elusive and tenuous.
Distrust of decision-makers undermines confidence
Much of this was caused by pervasive scepticism that AI companies or politicians would work towards these pro-social goals or prioritise outcomes that were good for everyone. For some people, the profit motive would inevitably conflict with and undermine any ambitions for public good AI. It was common, therefore, for our participants to vision positive ideas for AI deployment with great enthusiasm, but then to step back and caveat:
‘Can AI fix any issue if profit is at the heart of it?’
– Belfast participant
Tech companies, particularly ‘Big Tech’ and the ‘Big Men of Tech’ (most commonly imagined as Elon Musk), were seen as particularly lacking in pro-social strategic purpose and moral or values-based direction, which placed them inherently at odds with public good goals. Many participants thought that, for these actors, AI was currently ‘just a tool to enrich the existing owners of this technology’, rather than a serious proposition for public good.
‘AI prioritises profit, perpetuates capitalism.’
– Participant
‘The corporations producing AI tools’ prime purpose is to sell to people and manipulate opinion.’
– Participant
For others, their knowledge of how corporations behave, and their beliefs about unspecified operations behind government, underpinned a disbelief that AI would ever be used for public good. For them, AI-driven innovation was simply another means to increase and cement inequalities in society and deepen our ‘two-tier system’ between rich and poor.
These beliefs were often not generated by political ideology, regurgitated media tropes or casual references to online conspiracies. Instead, this distrust reflected first-hand, direct experience of what had already happened in their lives and their communities.
In Belfast, for instance, which is still living with the legacies of crime and corruption that accompanied paramilitary gang culture,[140] these contexts came to the fore in a conversation about the potential for AI to be used to make job applications easier and more accessible for local people. The decision-making group had visioned a positive, de-biasing role for an AI developer, who built software that helped people navigate application systems and account for cultural language differences. But doubts and tensions arose in the group when one man, who had worked for a long time in trade unions and with local government, pushed back:
‘You’re not getting the true picture – it’s all bullshitting – AI will be brown envelopes in the real world.’
– Belfast participant
In particular contexts, too, such as public services or in workplaces, many participants did not trust civil servants, frontline workers, policy-makers or employers to make decisions about AI that worked to everyone’s benefit.
‘I think it’s no surprise that these problems were seen through the historical legacy of Northern Irish society, still being one focused sadly around contested identities. But that’s true of lots of places. Maybe the advantage of coming to Northern Ireland is you get [issues of] AI and identity politics in technicolour, but it’s probably apparent if you dig anywhere.’
– Patrick, Belfast
Spotlight: Facial recognition technologies and policing
Public attitudes towards the use of facial recognition technologies in policing are broadly positive, with the majority of people seeing the benefits of these surveillance technologies. When publics are asked to explore the applications of this technology in specific social contexts, however, attitudes can appear very different.
Context
Our Belfast participants were asked to role play the issue of facial recognition in the Police Service Northern Ireland (PSNI) and to explore whether or how these cameras should be used in public spaces. The decision-making group consisted of majority white ‘local born’ participants from both Catholic and Protestant backgrounds, as well as individuals with more positive and ‘tech optimist’ views. They were asked to establish recommendations for this use, which they presented and debated with the wider group.
Views
The decision-making group felt unable to implement this technology in the Northern Ireland context at this time. While they cited worries around the potential misuse or discrimination that might arise from this use, their primary concern was distrust in the current police oversight structures (the Ombudsman) and the history of Northern Ireland’s contested governance and unresolved grievances.
They focused on the primary need to create trustworthy accountability systems, to maintain the progress made in community–police relationships, and to resource the police, before considering introducing technology into this fraught context.
This decision caused debate in the wider cohort. The call to create an additional oversight mechanism, replacing the existing accountability apparatus, ‘is so twentieth century, so “green/orange”’, challenged one participant. But the group as a whole displayed similar wariness of quick fixes and external interventions, especially from technology companies, with some taking a wider structural view and expressing a preference to invest the money in human resourcing of the police to build on its community work.
Conclusion
Even when public views on technologies appear clear-cut, when publics discuss specific socio-technical contexts, they reveal critical nuances and divisions. Northern Irish views (in the Ada-Turing AI Attitudes (2025) survey[141]) showed no significant differences from the UK trend, but Belfast’s example demonstrates the importance of cultural context and history, ground-level community relationships, and distrust in authority in shaping views about technologies and their applications. This case study points to the need for engagement and slow, careful work with local publics to build confidence and ensure a wider system of trustworthy relationships is in place before introducing technology into policing.
Lived experience of failing systems affects trust
Lived experience of underfunded health systems, and poor experience with digital health programmes or applications, meant people were less trusting that AI would be applied well in these contexts. Participants felt that digital tools had been applied as ‘cost-savers’ and ‘cut-throughs’, which actually made services less accessible or useful for them.
For that reason, many participants expressed doubts that AI would be applied appropriately, or in a measured fashion, in health services. Some feared an ‘all in’ approach that would undermine the health system’s core capacity to provide care, such as investing in AI rather than in the people who mattered, like doctors and nurses. Another concern was that systems would move swiftly to a ‘diagnose me’ tool, leaving people little choice but to accept automation.
Failing systems may drive public-sector administrators and managers to seek AI solutions. Our enquiry suggests that, for people who use these systems, lived experience of poor services and care in fact creates greater hesitancy and doubt that AI can be applied effectively and well, and improve services in ways that benefit them.
A clear example of how participants resisted, or appeared at odds with, these dynamics comes from our Brixton case study. Brixton’s communities experience endemic mental health challenges and crumbling mental health services, characterised by limited access and long waiting lists. Within this context, the people who engaged with our enquiry came to a unanimous decision during the workshop: ‘AI should not be used to diagnose mental health.’
‘Access to mental health services is often limited, with long waiting times, particularly affecting marginalised communities. This contributes to health inequalities in access to healthcare, which remain a significant issue, especially for Black and minority ethnic communities.’
– Natoya, Brixton
Perception of uncaring systems underpins concerns
The potential of AI to dehumanise what people believed were already fundamentally uncaring systems and bureaucracies was also a strong reason for people to feel highly concerned about AI use and deployment in the public sector.
One young man, whose asylum application was in progress, thought about Home Office decision-making and the possible use of AI for efficiency, to sift or make decisions on applications.
‘And when it comes to the Home Office, that does indeed scare me. See, being an asylum seeker is already a whole turmoil of sadness in itself. We know AI lacks the emotion and critical thinking of a human.’
– Belfast participant
Fragility of the social contract undermines optimism
Behind many of the concerns that participants raised when considering AI lay a sense that society was already under considerable strain. Many people felt that modern, digitally driven communications had already eroded social relationships and abilities to make meaningful connections with people. For them, AI would self-evidently ‘very easily divide and distance’ people, atomise society and segment generational differences, because this was already happening.
‘Modern life is trying to sever as many connections as possible.’
– Southampton participant
It was also very difficult for participants to believe that AI could contribute to a positive or fair society when existing inequalities were so egregious. For many participants, the profit-driven systems that currently characterise AI use meant that any use of AI would simply magnify the wealth disparities they already lived with, and by doing so exacerbate gentrification in Brixton (the ‘£1 million houses next to estates’) or landlords’ monopolies in Southampton. This, many felt, would only worsen with the greater proliferation of AI technologies.
Likewise, communities with historic and structural experiences of discrimination were highly attuned to the ways in which biases were likely to be exacerbated and entrenched with AI tools in public services and in workplaces. They expected this to increase with greater AI use.
A lack of confidence in the current status quo was also a reason why a few people preferred technical, rather than professional or human, solutions to social challenges.
These views were perhaps most strongly pronounced over climate change and technology, where vested interests and human fallibility were both seen to have brought the world to the brink of environmental catastrophe.
‘We cannot regulate this earth, we’re ruining it. AI is our only saving grace because humanity isn’t working, and people are too greedy.’
– Belfast participant
Participants expressed similar sentiments in conversations about healthcare systems, where people sometimes referred to being let down by 999 call handlers, or by GPs who they felt had given them inadequate care. Enthusiastic views on AI, therefore, could indicate where people felt badly let down by their existing support networks or public services.
Environmental and ecological impacts are red lines
‘The benefits of AI hold so much potential, but the risks it presents to our wellbeing through using huge amounts of fossil fuels and water may well accelerate our demise as a species.’
– Southampton participant
While most people were excited at the prospect that AI could help to address seemingly unsolvable challenges, like climate change, they were simultaneously dismayed at the potential for large-scale environmental damage from AI infrastructure. The learning phase could only give them top-level information on the kinds of impacts we might expect from widespread AI investments, such as water scarcity, depletion of minerals and raw materials, and hazardous waste. Participants wanted clearer evidence about the predicted environmental impacts of all choices to use AI-enabled technologies across society, to understand what trade-offs they were being asked to make.
Based on this widespread strength of feeling towards environmental protections, we can state that these diverse publics do not consider the perceived benefits of AI use sufficient to tolerate significant harms to the climate or environment, at global or local level. People will also want policymakers to provide detailed evidence about the impacts of AI use, to reassure publics that their policies are not harmful.
Spotlight: AI and creativity
The relationship between AI and creativity has become one of the more fraught and polarised debates about the impact of AI-enabled technologies on human societies, bringing together fears about job loss and replacement of human labour, with more ethical concerns about the impact of technology on individual self-expression and social connection.[142] The Ada-Turing AI Attitudes (2025) survey showed that 38% of the public perceived enhancing creativity to be a benefit of large language models (LLMs), such as ChatGPT.[143] Our enquiry demonstrated that there may also be place-based and community-level drivers that influence people’s views one way or the other.
Context
We did not design a structured conversation about AI and creativity into this project, either in the workshops or during the online learning sessions. However, many of our participants thought this was an important dimension of public good and AI and brought this topic to the table in different ways, depending on the site and the range of lived experience and perspectives that each distilled.
Views
Brixton’s group expressed the strongest opinions. Art and creative life were fundamental parts of public good: making art and supporting artists were intrinsic to both community life and identity, providing a basis for social translation between different parts of Brixton’s multicultural society and promoting cohesion. The group as a whole made a clear decision that ‘AI should not be used to make art’ as part of their collective statements.
‘Sometimes art is pleasing and visual and beautiful. Often it’s social with layers of context. AI art has no meaning behind it.’
– Brixton participant
Southampton’s participants had a different set of priorities. Some of its creatives and musicians already used AI tools in the course of making art or music: either in production, for inspiration or in behind-the-scenes administration. They had, therefore, more entrepreneurial perspectives on the relationship between AI and creativity. Although they were still concerned about ‘over-use’ of AI and the impact on individual creative endeavours, they were far more open to the uses of AI in local creative action.
In Belfast, there was far less focus on the relationship between AI and creativity, and making art or music. However, neurodivergent participants highlighted the importance of AI for them, in enabling the creativity of neurodiverse minds to come through.
Conclusion
Our enquiry was a locus for a range of ideas about AI and creativity, and channelled core differences of perspective that we have seen in the growing public debate on this topic. However, the strength of opinion in Brixton suggests that AI may present particular challenges to local areas and communities, who may feel strongly that the current direction of AI development runs roughshod over their ways of living.
Brixton’s participants do in fact have very little power to change the technical course of AI development or evolving social norms over AI use in art-making. But communities may be able to generate community-based norms that align with the needs and sentiments of their members. If – as this research evidences in Brixton – many members of a particular community feel similarly, this points to the need for better understanding of, and options for, different devolutions of choice in AI use.
Key takeaways: How AI and public good interact
AI for public good is not ‘one thing’: People’s views are different from policymakers’. For these people, public good is a morally driven concept, not simply the object of innovation and economic growth.
Public good means AI works for everyone: This means life is fairer for everyone and people have higher standards of living, as well as lower energy and food bills.
Autonomy, care and relationality must be safeguarded: Efficiencies should enable people to have greater autonomy and freedom to nurture the things that give them meaning in their lives – including person-to-person care and connection.
AI for public good means solving big problems: AI use, for example in climate science, can help create more sustainable nations, societies and communities that waste fewer resources. AI for public good is also related to health science and diagnostics of major diseases. AI use could empower people and rebalance power more in the hands of communities.
But ‘AI for public good’ is elusive and tenuous: Distrust of decision-makers and lived experience of failing systems undermine confidence and trust. Perceptions of uncaring systems that can dehumanise people, and the fragility of the social contract, weigh against any optimism or belief that AI could contribute to a positive or fair society.
Within all these views, environmental impacts are red lines: People weighed optimism about AI’s potential to help address challenges like climate change against current and potential large-scale environmental damage from AI infrastructure.
What do publics expect to see?
How we evolved these expectations
This enquiry was not narrowly designed to scope a set of recommendations or consensus statements. Rather, it aimed to enable publics to take an expansive approach to ‘public good’ and AI, in their own words, and to affirm the place-based integrity of each of the sites.
As part of the final workshop, we asked community researchers to draw together a set of statements within the group, expressing as far as possible the expectations of what participants wanted to see happen, based on their conversations and experiences.
As in other elements of design, the groups approached this task differently and therefore produced evidence bases that cannot be easily synthesised. Belfast and Southampton, for instance, used more ‘legislative theatre’[144] techniques, where individuals addressed their expectations to ‘people in power’, and had these validated or supplemented by the wider group. Brixton chose to complete a set of statements as part of a group exercise, with a loose but collective process of validation for each of these.
We have drawn on these different strands of evidence, using thematic analysis to synthesise them and construct a set of expectations which we believe can hold these ideas together. We presented them to participants in a findings workshop (10 March 2025) for feedback, providing an asynchronous option for review for those who were not able to attend. We received some constructive feedback, which is incorporated into the expectations below.
Expectations
We present these constructed statements with reference to the various different aspects, requirements or sentiments that we gathered from the different sites.
‘A sentiment that really came across in their letters was “don’t be greedy”. Don’t let this all be about money. We know AI can be an added benefit, but if you’re only using this to make a profit, it’s not going to work.’
– Paula, Belfast
Pro-social and equitable – AI should be public- and person-centred, and supportive of individual needs, talents and abilities.
AI technologies should be created in the context of:
Policy, regulation and decision-making
- a system of laws, regulation and safeguards that monitor and address harms from different technologies
- AI policies to safeguard information so that it is reliable, robust and trustworthy
- power sharing in AI deployment decision-making between corporations, institutions and state actors, and communities and localities
- space for everyone to collaborate and cooperate to ensure pro-social benefits from AI technologies
- AI at work that relieves workers from mundane tasks so they can flourish and contribute to their community and society
- education that centres children’s abilities, and AI use that fosters and does not conflict with this.
Funding and infrastructure
- inclusive data systems, which recognise diversity and safeguard against minoritisation and discrimination
- agreements that corporations that make money from AI must contribute to public infrastructure
- investment in new spaces or mechanisms to enable and amplify public voice and discussion
- access to AI learning so everyone can cultivate the critical thinking needed to understand AI.
Relational and ethical – AI should further human and community needs.
To ensure this:
- Corporations, institutions and state actors must reshape their strategic goals and prioritise people over profit or ideology.
- People who work with or who make decisions about AI should centre values such as empathy and relationality, rather than profit.
- AI should be guardrailed so it does not conflict with core values, our humanity or the ability to make meaning. Leaders, developers and corporations in the AI space must work harder to create trust with the public by centring ethics and safety.
- Systems and software should be designed and tested with different publics in mind.
- All workers should share in the benefits of using AI in their workplace to raise productivity and efficiency, such as by a reduction in hours of work and greater family time.
Future focused and ambitious – AI should advance humanity’s needs, and children and future generations should be considered.
To make this a reality:
- AI can and should be prioritised to address urgent and legitimate social goals and crises, such as climate change, disease and poverty.
- In deploying AI, leaders must be attuned to the long-term consequences and social ripple effects of their actions, and prevent elements of AI that may cause damage to future generations, such as environmental impacts.
- Tech corporations in the West should take a leading role in ensuring that the benefits of AI are extended to the Global South.
Responsibly deployed – AI should be used considerately, and only where necessary and effective.
To make this a reality:
- AI in public services should only be used where it can make appropriate and specific improvements, such as reducing wastefulness, and in ways that the public think tangibly benefit them.
- AI should actively reduce harm and discrimination, not add to harms or worsen discrimination.
- Any decision about the design and delivery of AI should be grounded in good and robust information and research, which publics must be able to review.
- We can do good through AI, but its rampant marketised growth will divide and disempower society: all decision-makers should be conscious of their power and responsibility to prevent that outcome.
Conclusion
This report has presented evidence of how publics think about their relationships to public good and AI through a qualitative and grounded approach, which provides both contextual richness and depth. Spending time with publics is a facility that only qualitative methods offer, producing research that has listened for the tonalities of people’s views, built an understanding of their needs and expectations, and explored why they believe what they do.
This is important because ‘public good’ is frequently referenced as a basis for AI policy-making. However, as this research demonstrates, there is not one agreed conception of either social or public good. It is a contested term that is used in relation to differing social, political and economic ideas. When policymakers use terms like public good, they must take into account the views, experiences and expectations of people from diverse backgrounds and reflect those in regulation, policies and the provision of ‘public goods’ like health, education or welfare.
People in this study imagined the relationship between ‘public good’ and AI in very different ways from policymakers.
The evidence presented here demonstrates that ‘public good’ is a morally and community-grounded concept for people, which centres equity and fairness for everyone.
While people are interested in innovation to boost economic growth, they are more deeply invested in ensuring foundations are built so that everyone can lead meaningful and purposeful lives.
The research presents a vibrant picture of what the AI revolution looks like, from the perspectives of people who are living in it. It identifies ‘emergence’ as an important context for views, and demonstrates how this can magnify uncertainties in and around AI use. Publics are in the process of ‘not quite living with AI’, as we show; this sense of both positive and negative anticipation – of the avalanche of changes that may be coming on the horizon – is commonly held.
Only a very few people felt confident about the futures they saw AI presenting. For most people, their observations about how AI technologies are currently managed and governed made them concerned that decision-makers would not design or deploy technologies in ways that centred public needs. This message is clear: trust in political and economic structures is currently fragile, and politicians and companies need to work harder to build confidence with publics.
In particular, public-sector administrators and managers may be driven by underfunded systems to seek AI solutions. This research suggests that, for people who use these systems, lived experience of poor services and care creates doubt that AI will be applied effectively, or improve services in beneficial ways. Experience of services, levels of civic engagement and awareness are unevenly distributed across the country. In the current funding context, local bodies may not have capacity to strengthen already strained civic relationships, or to build the necessary routes for co-production around AI.
The sophisticated perspectives from diverse publics detailed in this report should demonstrate to policymakers that publics are able to grasp fundamental and complex ideas about how AI works, both as technological tools and in socio-technical systems. People’s uncertainties, for instance, came from a perceptive reading of how the ecosystem is currently operating, rather than deficits of knowledge or capacity for interpretation.
The fact that they could simultaneously hold many different views about AI, too, speaks to their awareness of its technical facilities and multiple varieties of uses. Overall, the research demonstrates that ‘public good’ is a meaningful concept for people when engaging on societal issues, and that values and feelings associated with public good are robust even in the context of complex factors like AI technologies.
The ways in which qualitative evidence surfaces nuances and tensions inherent in perspectives also presents important context, and sometimes caveats, to the views that are articulated in surveys. For example, this report has surfaced deeper readings of facial recognition technologies for policing, when they are thought of in relation to specific cultural and political contexts. The evidence provides substance, too, to views about the benefits of efficiency or how it may be the wider distrust in structures and social relationships that underpins some people’s preferences for technology.
The overarching evidence of this report foregrounds the importance of public input and engagement in the AI revolution. There is a recognised need, evidenced through existing research, to locate the legitimacy of AI developments and deployments not in an abstract idea of ‘good’, but in context-specific views, concerns, hopes and expectations of publics.
Publics are sometimes represented as lacking the critical perspectives or technical knowledge to make informed judgements about AI technologies, or as over-influenced by ‘unduly negative’ media stories, implying a lack of comprehension or capability to deal with AI’s complexity. This research – which provided space and time for distinct views to emerge – makes visible the sense and meaning people make of what they encounter about AI.
Many of the study’s participants felt better prepared to benefit from, navigate and purposely contest any decisions about AI, because they had grown more confident in understanding what implications AI held for them, their communities and their values. They believed that everyone could better contribute to a good society, with AI deployment, if they were supported to develop this knowledge, which could form the basis for social as well as individual decision-making.
Recognising publics as having active agency in relation to social changes will be important in the AI revolution. Some of our participants shared that they see this knowledge as an extension of social action: they want to use it to advocate for others, and ensure AI works for everyone. They believe that ‘critical AI literacy’ should become an ongoing, institutionalised process that can and should engage publics in a collective thinking-through about what AI means for them. They want to see a more responsive, democratic engagement that gives them more facility to shape the directions of AI policy, as well as resist or reject its use in their lives.
This research, while small in scale, has a depth of insight that reinforces the case for better, more systematised or even institutionalised channels for public input and dialogue between publics and decision-makers in government and industry. The community grounding of this research, and the bridging role of the community researchers, demonstrate a productive pathway to further this objective. We should expect geography will situate people in the AI revolution, but locality and community also provide meaningful spaces for dialogue and co-production in this regard. Further research should develop relationships and knowledge of the locality of AI in the UK.
These spaces need adequate funding investment behind them. Our project was underpinned by UK-wide research investment into public voice for AI, but its place-basis may indicate participatory disparities already embedded into AI deployment in the UK. For example, the impetus for at least two pairs of community researchers to take part in this project came from observing developments in AI knowledge and economic infrastructure in their local area, local universities or devolved policy. And, in one case, they had entered into conversation with these adjacent institutional networks at the same time as starting this project.
It is encouraging that local spaces for conversations about AI are emerging through the interaction of institutions in deliberative systems, but there is also a risk of reinforcing existing geographic disparities in AI knowledge and infrastructure. Greater resource is needed for publics to benefit from the same opportunities for dialogue about what is good for them and their relationship with AI, including investment from policymakers across the UK to include these voices in decision-making.
Acknowledgements
This report was lead-authored by Eleanor O’Keeffe, with input and support from Octavia Field Reid. Roshni Modhvadia and Helena Hollis worked closely with community researchers in Southampton and Brixton, respectively, during co-design, fieldwork and co-analysis phases and contributed to the development of the report’s insights.
The community researchers were fundamental to shaping the research through co-design, participatory practice and the co-analysis that underpinned the findings presented here. The inputs of Anita Kambo, Natoya Whyte, Rae Turpin, Claudia Murg, Paula Quigley and Patrick Toland collectively and individually are recognised as co-authorship.
Emma Newbury from the Young Foundation provided regular support and input into our community research approach from October 2024 to February 2025.
We and AI, a non-profit organisation working to build critical thinking about AI in society, contributed their expertise to the information design and delivery of the learning phase. Particularly, Tania Duarte and Lizzie Remfry’s contributions helped us to realise the dialogic and enquiry-based learning aims for this project.
We have received ongoing input from the Public Voices in AI team and acknowledge the feedback from the People’s Panel and Connected by Data on research design and communication of our research findings.
The research reported here was undertaken as part of Public Voices in AI, a satellite project funded by Responsible AI UK and EPSRC (Grant number: EP/Y009800/1). Support for the Ada Lovelace Institute’s work on the deliberative enquiry was provided by BRAID. BRAID is funded by AHRC (Grant number: AH/X007146/1).
Public Voices in AI was a collaboration between: the ESRC Digital Good Network at the University of Sheffield (Grant number: ES/X502352/1), Elgon Social Research Limited, Ada Lovelace Institute, The Alan Turing Institute, and University College London.
Public Voices in AI team
- Helen Kennedy, Professor of Digital Society, University of Sheffield. Director of the Digital Good Network.
- Ros Williams, Senior Lecturer in Digital Media and Society, University of Sheffield. Associate Director of the Digital Good Network.
- Susan Oman, AI and In/equalities Lead, Centre for Machine Learning & Senior Researcher at the Digital Good Network, University of Sheffield.
- Helen Margetts, Director, Public Policy Programme, Alan Turing Institute for Data Science and AI.
- Octavia Field Reid, Associate Director (Public participation & research practice), Ada Lovelace Institute.
- Jack Stilgoe, Professor of Science and Technology Policy, Dept of Science & Technology Studies, University College London.
- Eleanor O’Keeffe, Public Participation & Research Practice Lead, Ada Lovelace Institute.
- Roshni Modhvadia, Researcher, Ada Lovelace Institute.
- Mhairi Aitken, Ethics Fellow, Alan Turing Institute.
- Cian O’Donovan, Senior Research Fellow, Department of Science and Technology Studies, University College London.
- Tvesha Sippy, Researcher, Alan Turing Institute.
- Sara Cannizzaro, Postdoctoral Researcher, Public Voices in AI project.
- Ruth Lauener, Manager, Digital Good Network, University of Sheffield.
- Sarah Givans, Research Support Project Administrator, Digital Good Network, University of Sheffield.
Appendix 1: Research process
Community-led research
The project was grounded in the principles of peer or community-led research practice.[145] We engaged six community researchers to work across three sites – two each in Belfast, Brixton (London) and Southampton – and collaborated with them to develop a programme of work for 15 people from their respective communities, which engaged them to think about ‘public good’ before moving on to exploring how AI might interface with their lives.
Our community researchers brought a wealth of relevant experience and interests in domains such as health and social care, community organisation and environmental justice, the voluntary sector, journalism and community media, and social and health initiatives providing connections for local people. They were active in their communities, involved in their local schools, churches or campaigns for social justice.
Co-design process
We engaged in co-design[146] with the community researchers to action a ‘place-based’ approach throughout all phases of the project and ensure the design reflected the social and cultural needs of the communities we wanted to reach. All elements of the project went through a co-designed process of decision-making, from recruitment to workshop design, including all logistics of delivery, such as choice of venue and catering.
Our ways of working, which we co-designed with community researchers, ensured this underpinning. We met as a whole team for weekly two-hour co-design meetings, which ran for eight weeks from 18 October. Interspersed with these, smaller, local teams met more regularly for design planning and knowledge sharing.
In these workshops, we discussed how we could approach each stage of the research across all the locations, as well as how and where to make site-specific adjustments. In our first meeting, we established principles of co-design, which we used to inform broader decision-making. These included ensuring inclusivity and diversity, and actioning ethical approaches in engagements with participants.
Co-design decision-making worked across all aspects of the project, including:
- Recruitment: discussion on how to achieve the right sample for the research, and how best to communicate the opportunity to take part to the different local communities.
- Onboarding: ethical selection and engagement of participants and ensuring informed consent.
- Workshop design: discussion about the design of the structured workshops to elicit views on public good and AI.
- Logistics: ethical and community-centred approaches to venues, catering and structure of workshops.
- Information session design: discussion on how best to introduce AI subject matter to create a shared learning journey for participants.
- ‘Enquiry phase’: devising non-structured activities that participants would be invited to undertake outside of the structured online or in-person workshops.
Across all these components of the research journey, the community researchers were invited to share their thoughts and experiences, and to discuss these as a group to reach shared decisions that incorporated diverse expertise and insights.
Recruitment
Community researchers were encouraged to take different pathways to achieving an inclusive and diverse cohort through their knowledge and positionality within the community. We conducted recruitment design and delivery with reference to the following principles: place-based approach, amplifying voices, diversity of outlooks, fairness and equity.
Decisions about outreach and engagement reflected the community researchers’ positionality and experience, as well as their local social realities. Strategies differed slightly depending on area. Belfast’s community researchers shared a call for participation through the communications networks of Northern Ireland’s voluntary sector. Brixton’s community researchers focused their in-person approach on local public spaces, such as cafés and libraries. Southampton’s researchers combined in-person networking at community events with social media engagement that reached out to local voluntary and social initiatives.
Prospective participants across all sites were asked to complete one online expression-of-interest questionnaire, to establish demographic information such as age range, gender and ethnicity, as well as their availability to participate and any access needs.
The questionnaire also asked participants if they self-identified with any descriptions in a list of different groups that are often excluded from research:
- I have experienced exclusion because of my citizenship status.
- I identify my sexual or gender identity as LGBTQIA+.
- I identify myself as belonging to a minoritised ethnic identity.
- I identify myself as being disabled or having a disability.
- I identify myself as having experienced poverty.
- I am part of a group or community that is not listened to by those in power who make social or policy decisions.
Selection
We aimed to recruit 15 participants per location, but had a higher volume of expression-of-interest responses (33 in Belfast, 22 in Brixton and 39 in Southampton), which necessitated a selection process.
Our initial approach was for the Ada team to manage the selection process, to ensure fairness and transparency and to manage potential conflicts of interest. We agreed core criteria with the community researchers, which would balance the different demographic realities in each place, starting with ability to participate in all aspects of the project.
During the selection process, it became clear that the selection team had to be aware of potential conflicts of interest, and that community researcher knowledge of these networks was integral to making good decisions about group dynamics and the safety of the community research team.
We therefore revised the approach to a shared selection process, and worked with each pair of community researchers (in different ways that reflected local needs and interests) to convene a group of 16 participants in each location, meeting the principles of our recruitment design.
Onboarding
Each selected participant was individually onboarded into the research project via a ten-minute telephone call with a community researcher or an Ada team member. The onboarding phone calls included an explanation of the research aims and a description of the process, with sense-checking to ensure informed consent. We asked participants about their motivations for taking part, as well as their preferred modes of communication, access and dietary needs, and choice of remuneration method.
Fieldwork
In-person workshops were held as follows:
Brixton: initial workshop 26 November / final workshop 16 December
Belfast: initial workshop 30 November / final workshop 15 December
Southampton: initial workshop 30 November / final workshop 14 December
Information sessions were held online, in between these workshop dates, with all participants together, on 3, 9 and 12 December. Holding the information sessions online as one group provided parity of information, ease of participation, and a sense of connection to the project as a whole across sites.
Each workshop deployed a combination of exercises drawn from deliberative research,[147] post-qualitative and arts-based methods,[148] and community organising traditions. These included visioning[149] and scenario work,[150] group-based discussions, and forms of legislative theatre,[151] co-designed with the community researchers.
Learning phase
The learning phase was developed in partnership with the not-for-profit AI literacy organisation We and AI. It was designed to support diverse participants, who came with a range of experience and knowledge of AI, to develop some shared understandings and language to help them think collectively about AI and public good.
The sessions were designed to:
- Introduce core concepts about AI technologies, which we think are important in navigating these ideas about ‘good’, through the provision of accessible prompt materials, such as presentations.
- Introduce and action a bespoke, accessible critical thinking process, to give participants a structured and simple way to connect their vision of ‘good’ to these core principles.
- Support participants in small breakout groups to begin to connect these concepts in relation to their versions of ‘good’ through semi-structured discussion and facilitation.
The following table details the content of these sessions.
Session 1: What is AI?

| Topics | Exercises |
| --- | --- |
| AI in daily life | Plenary brainstorming exercise (Zoom chat) |
| Definitions of AI – historical development of AI and its terminology | Presentation (plenary); Q&A and Zoom chat feedback |
| AI materialities: the systems, labour and infrastructures that underpin AI development | Presentation (plenary); enquiry-based visual exercise (small group) – labelling AI’s component parts |
| What is data? | Presentation (plenary) on what ‘counts’ as data and its relationship to AI |

Session 2: AI and society

| Topics | Exercises |
| --- | --- |
| How AI systems work | Presentation (plenary) on AI functionalities (predictive and generative AI); interactive demo of building a predictive model with simple data classification (sorting face images by emotion) |
| Is data in AI biased, and why? | Presentation (plenary) with historical, social and geographical examples of where AI has demonstrated bias; enquiry-based interactive exercise exploring bias in generative AI through prompts |
| The AI Mirror: whose values does AI reflect? | Presentation (plenary) on the AI mirror and who owns and controls AI deployment; enquiry-based interactive exercise (small group) exploring values in generative AI platforms |

Session 3: AI and our futures

| Topics | Exercises |
| --- | --- |
| AI isn’t neutral | Presentation (plenary) on emotion recognition technology; presentation (plenary) with interactive Q&A on whose perspectives, interests and needs are omitted from AI design and development |
| AI safety measures | Presentation (plenary) on governance and regulation options (EU, UK, USA) and ownership models; game-based role play (small groups) on decision-making, navigating trade-offs and unintended consequences |
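To give a concrete sense of the session 2 demo, the sketch below illustrates the kind of simple predictive model participants saw being built: it trains a classifier to sort examples into emotion labels. This is a minimal, hypothetical reconstruction assuming Python with NumPy and scikit-learn; the workshop’s actual demo tools and data are not documented here, and the random numeric features below merely stand in for measurements extracted from face images.

```python
# Minimal sketch of a simple classification demo (illustrative only).
# Assumes NumPy and scikit-learn; random features stand in for face images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(200, 8))          # 200 'images', 8 features each
# Hypothetical human-applied labels (0 = 'neutral', 1 = 'happy'); in practice
# this labelling step is where the biases discussed in session 2 creep in.
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)  # learn label patterns
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point of such a demo, echoed in the core concepts below, is less the accuracy score than showing that a model can only reproduce the patterns present in whatever labelled data it is given.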
The core concepts we introduced our participants to were:
- AI isn’t artificial: demonstrating the many human decision-makers, and also human workers, involved in the development of AI, meaning these technologies do not exist separately from society. Furthermore, showing that AI systems have a material reality and are not intangible.
- AI isn’t like a human: highlighting how AI does not ‘think’, and is not intelligent in the way we understand human beings to be. Highlighting ways that AI is built for specific tasks, rather than being able to apply knowledge to novel situations in the ways that humans can.
- AI isn’t always right: demonstrating the ways in which AI systems can generate false information or ‘hallucinations’. Furthermore, showing how AI outputs can be maladapted to the kinds of outcomes we would want.
- AI isn’t neutral: demonstrating the ways in which the people who design and deploy AI systems shape their functions and usage, translating human biases and motivations into the technology. Also considering how the data used to train AI is limited and interpreted through labelling, which introduces further biases.
Engaging people through enquiry
Our enquiry took a holistic approach to grasping how people (individuals, groups and communities) made sense of public good and AI, investigating them initially as separate concepts, before supporting participants to consider the relationship between them in the final workshop.
In addition to the variety of arts-based methods we incorporated in the structured events, our ‘enquiry’ mode aimed to go behind the scenes, and engaged a variety of ethnographically informed methods, to get to know more about participants and how AI appeared in their lives.
This included observational notes of conversational dynamics, from community and Ada researchers, but it also encompassed a range of methods that invited participants to respond to these topics as they wished. All participants were given a guided ‘self-led’ enquiry handbook, authored by Southampton community researcher Rae Turpin, which presented them with a number of ideas and options about how to record their experiences with AI in different ways, such as audio notes, interviews, diaries or email reflections. In addition, participants had the option to undertake a semi-structured interview with community researchers, if they preferred. Over one-third of the group completed and returned these contributions.
This multi-methods approach produced an evidence base that is multi-modal,[152] and incorporates verbal, textual and visual forms of evidence that have been generated in different ways throughout the project. This allowed us to balance the evidence from group exercises and discussions with person-centred communicative outputs when analysing sense-making across these outputs.
The approach had some limitations, however. We did not manage to engage every participant through these methods, so some people’s perspectives appear more strongly in our evidence trace than others’, even though the community grounding certainly helped us to engage people more meaningfully in this process.
Co-analysis process
Co-analysis was conducted with the community researchers, to ensure that the data generated through the research was interpreted and analysed through a lens of social knowledge and lived experience. As with the co-design process, this ensured that a place-based perspective was continuously present throughout the research.
There were three phases to the co-analysis process:
- Familiarisation – where the local teams reflected on and discussed the range of evidence and began to signify it in their separate place-based contexts.
- Reflections – where each member of the team took responsibility for a particular area and presented some analytic findings back to the group in their local teams.
- Conclusions – a final co-analysis workshop (three hours, online, 24 January 2025), where the research teams for each site shared their insights and drew together overarching insights from each other’s findings.
These discussions helped to clarify which insights were unique to location, and which were more commonly seen across all the sites, as well as to begin a process of signification about what findings were the most important and how we should articulate them. After the co-analysis workshop, community researchers fed into the further development of these insights asynchronously, through shared working documents.
Findings workshop
The final research event was the 90-minute findings workshop, which is described in the report. This took place online over Zoom on 10 March and ran in a similar way to the online learning sessions, including a short presentation on the insights developed from the project and a description of the report’s structure. This included time for questions, requests for clarification, or challenges in plenary, before moving on to explore the expectations.
The findings workshop gave participants the opportunity to test and challenge the insights we had developed. They gave constructive feedback on the place-based presentation of the qualitative findings to policymakers, as well as on the ways in which the report might better flag participant concerns about negative environmental impacts. The feedback form remained open until 21 March, although we received only a further three contributions through that route.
Footnotes
[1] HC Deb 13 January 2025, vol 760, col 55. See: ‘Artificial Intelligence Opportunities Action Plan’ (Hansard, 13 January 2025) <https://hansard.parliament.uk/Commons/2025-01-13/debates/8C036071-5845-443C-B903-57483D552854/ArtificialIntelligenceOpportunitiesActionPlan> accessed 21 March 2025.
[2] Elliot Jones and Cansu Safak, ‘Can Algorithms Ever Make The Grade?’ (Ada Lovelace Institute, August 2020) <https://www.adalovelaceinstitute.org/blog/can-algorithms-ever-make-the-grade/> accessed 21 March 2025.
[3] Melissa Heikkilä, ‘Dutch scandal serves as a warning for Europe over risks of using algorithms’ (Politico, March 2022) <https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/> accessed 21 March 2025.
[4] Roshni Modhvadia, Tvesha Sippy, Octavia Field Reid and Helen Margetts, ‘How Do People Feel About AI?’ (Ada Lovelace Institute and The Alan Turing Institute, March 2025) <https://attitudestoai.uk/> accessed 25 March 2025.
[5] ‘AI Opportunities Action Plan’ (GOV.UK, January 2025) <https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan> accessed 21 March 2025.
[6] See, for instance, how Sam Altman, the CEO of OpenAI, sets out his vision in ‘Reflections’ (Sam Altman, 6 January 2025) <https://blog.samaltman.com/reflections> accessed 21 March 2025.
[7] Dario Amodei, ‘Machines of Loving Grace’ (Dario Amodei, 11 October 2024) <https://darioamodei.com/machines-of-loving-grace> accessed 21 March 2025.
[8] Carlota Perez, ‘What is AI’s Place in History’ (Project Syndicate, 11 March 2024) <https://www.project-syndicate.org/magazine/ai-is-part-of-larger-technological-revolution-by-carlota-perez-1-2024-03> accessed 10 March 2025.
[9] An example of this can be seen in Tim Davies and others, ‘Global Citizen Deliberation on Artificial Intelligence: Options and Design Considerations’ (Connected By Data, September 2024) <https://connectedbydata.org/assets/resources/Global%20Citizen%20Deliberation%20on%20Artificial%20Intelligence_%20Options%20and%20design%20considerations%20-%20Final%20draft%20-%20Sept%202024.pdf> accessed 10 March 2025.
[10] Michele E Gilman, ‘Democratizing AI: Principles for Meaningful Public Participation’ (Data & Society, 2023) <https://datasociety.net/wp-content/uploads/2023/09/DS_Democratizing-AI-Public-Participation-Brief_9.2023.pdf> accessed 20 March 2025.
[11] In 2016, the House of Commons Science and Technology Committee recommended that there should be more ‘public dialogue’ on AI, although it did not suggest that the government should itself invest in a programmatic approach. The language of ‘public dialogue’, and any emphasis on public engagement, is absent from more recent pronouncements on AI policy in 2024. See: ‘Robotics and artificial intelligence: Fifth Report of Session 2016-2017’ (House of Commons, 13 September 2016) <https://publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/145.pdf> accessed 12 March 2025.
[12] ‘Robotics and artificial intelligence: Fifth Report of Session 2016-2017’ (House of Commons, 13 September 2016) <https://publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/145.pdf> accessed 12 March 2025.
[13] Corinne Cath and others, ‘Artificial Intelligence and the “Good Society”: The US, EU, and UK Approach’ (2018) 24 Science and Engineering Ethics 505.
[14] David Leslie and others, ‘“Frontier AI,” Power, and the Public Interest: Who Benefits, Who Decides?’ (Harvard Data Science Review, September 2024) <https://hdsr.mitpress.mit.edu/pub/xdukxlpp> accessed 18 July 2024.
[15] Michele E Gilman, ‘Democratizing AI: Principles for Meaningful Public Participation’ (Data & Society, 2023) <https://datasociety.net/wp-content/uploads/2023/09/DS_Democratizing-AI-Public-Participation-Brief_9.2023.pdf> accessed 20 March 2025.
[16] HC Deb 13 January 2025, vol 760, col 55. See: ‘Artificial Intelligence Opportunities Action Plan’ (Hansard, 13 January 2025) <https://hansard.parliament.uk/Commons/2025-01-13/debates/8C036071-5845-443C-B903-57483D552854/ArtificialIntelligenceOpportunitiesActionPlan> accessed 21 March 2025.
[17] Gaia Marcus, ‘Ada Lovelace Institute Responds to AI Opportunities Action Plan’ (Ada Lovelace Institute, 13 January 2025) <https://www.adalovelaceinstitute.org/news/ai-opportunities-action-plan/> accessed 23 January 2025.
[18] Theresa Züger and Hadi Asghari, ‘AI for the Public. How Public Interest Theory Shifts the Discourse on AI’ (2023) 38 AI & SOCIETY 815 <https://link.springer.com/article/10.1007/s00146-022-01480-5> accessed 10 February 2025.
[19] ibid.
[20] National Audit Office, ‘Use of Artificial Intelligence in Government’ (NAO, 2024) <https://www.nao.org.uk/wp-content/uploads/2024/03/use-of-artificial-intelligence-in-government.pdf> accessed 17 March 2025.
[21] Luciano Floridi and others, ‘AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’ (2018) 28 Minds and Machines 689.
[22] Andrea Medrado and Pieter Verdegem, ‘AI for Social Good? Inspirations from Participatory Action Research (PAR) to Critical Data Studies’ (University of Westminster, 9 May 2023) <https://westminsterresearch.westminster.ac.uk/item/w3050/ai-for-social-good-inspirations-from-participatory-action-research-par-to-critical-data-studies> accessed 10 July 2024.
[23] Nenad Tomašev and others, ‘AI for Social Good: Unlocking the Opportunity for Positive Impact’ (2020) 11 Nature Communications 2468.
[24] Martin Tisné, ‘What is the best example you can think of where AI serves the public interest?’ (LinkedIn, January 2025) <https://www.linkedin.com/posts/martin-tisne_publicinterestai-alphafold-aiinnovation-activity-7287393501569318914-6r48> accessed 15 February 2025.
[25] Jeremy Kahn, ‘France, Tech Companies and Philanthropies Back New $400 Million Foundation to Support Public Interest AI’ (Fortune, 10 February 2025) <https://fortune.com/2025/02/10/france-tech-companies-and-philanthropies-back-400-million-foundation-to-support-public-interest-ai/> accessed 19 February 2025.
[26] Elizabeth Bondi and others, ‘Envisioning Communities: A Participatory Approach Towards AI for Social Good’, Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (Association for Computing Machinery, 2021) <https://doi.org/10.1145/3461702.3462612> accessed 19 July 2024.
[27] Lara Groves and others, ‘Going Public: The Role of Public Participation Approaches in Commercial AI Labs’ (arXiv, 16 June 2023) <http://arxiv.org/abs/2306.09871> accessed 21 March 2025.
[28] Abeba Birhane, ‘Bending the Arc of AI towards the Public Interest’ (AI Accountability Lab, 18 February 2025) <https://aial.ie/pages/aiparis/> accessed 25 March 2025.
[29] Eirini Malliaraki, ‘What Is This “AI for Social Good”?’ (Medium, 21 May 2019) <https://eirinimalliaraki.medium.com/what-is-this-ai-for-social-good-f37ad7ad7e91> accessed 21 March 2025.
[30] Jared Moore, ‘AI for Not Bad’ (Frontiers in Big Data, 11 September 2019) <https://www.frontiersin.org/journals/big-data/articles/10.3389/fdata.2019.00032/full> accessed 21 March 2025.
[31] ‘AI Opportunities Action Plan’ (GOV.UK, 13 January 2025) <https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan> accessed 10 March 2025.
[32] HC Deb 13 January 2025, vol 760, col 55. See: ‘Artificial Intelligence Opportunities Action Plan’ (Hansard, 13 January 2025) <https://hansard.parliament.uk/Commons/2025-01-13/debates/8C036071-5845-443C-B903-57483D552854/ArtificialIntelligenceOpportunitiesActionPlan> accessed 21 March 2025.
[33] Elizabeth Waind, ‘Trust, Security and Public Interest: Striking the Balance: A Review of Previous Literature on Public Attitudes towards the Sharing, Linking and Use of Administrative Data for Research’ (2020) 5 International Journal of Population Data Science <https://ijpds.org/article/view/1368> accessed 2 March 2025.
[34] Ada Lovelace Institute and The Alan Turing Institute, ‘How Do People Feel About AI?’ (Ada Lovelace Institute, 6 June 2023) <https://www.adalovelaceinstitute.org/wp-content/uploads/2023/06/Ada-Lovelace-Institute-The-Alan-Turing-Institute-How-do-people-feel-about-AI.pdf> accessed 21 March 2025.
[35] Octavia Field Reid and others, ‘What Do the Public Think About AI?’ (Ada Lovelace Institute, 29 October 2023) <https://www.adalovelaceinstitute.org/evidence-review/what-do-the-public-think-about-ai/> accessed 4 March 2025.
[36] Yasmin Ibison, ‘Artificial Intelligence for Public Good’ (Joseph Rowntree Foundation, 14 December 2023) <https://www.jrf.org.uk/ai-for-public-good/artificial-intelligence-for-public-good> accessed 23 July 2024.
[37] National Data Guardian, ‘Who do we mean by public benefit? Evaluating public benefit when health and adult social care data is used for purposes beyond individual care’ (National Data Guardian, 14 December 2022) <https://assets.publishing.service.gov.uk/media/6398e4a78fa8f55304b07d01/NDG_public_benefit_guidance_v1.0_-_14.12.22.pdf> accessed 16 July 2024.
[38] ‘Putting Good into Practice: A Public Dialogue on Making Public Benefit Assessments When Using Health and Care Data’ (GOV.UK, 14 April 2021) <https://www.gov.uk/government/publications/putting-good-into-practice-a-public-dialogue-on-making-public-benefit-assessments-when-using-health-and-care-data> accessed 16 July 2024.
[39] Fran Harkness, Cornelis Rijneveld, Yuncong Liu, Shayda Kashef and Mary Cowan, ‘UK Wide Public Dialogue Exploring What the Public Perceive as “Public Good” Use of Data for Research and Statistics’ (ADR UK, 2022) <https://www.adruk.org/fileadmin/uploads/adruk/Documents/PE_reports_and_documents/ADR_UK_OSR_Public_Dialogue_final_report_October_2022.pdf> accessed 10 December 2024.
[40] ‘How Statistics Can Serve the Public Good: A Think Piece’ (Office for Statistics Regulation, 7 February 2024) <https://osr.statisticsauthority.gov.uk/publication/how-statistics-can-serve-the-public-good-a-think-piece/> accessed 27 February 2025.
[41] Octavia Field Reid and others, ‘What Do the Public Think About AI?’ (Ada Lovelace Institute, 29 October 2023) <https://www.adalovelaceinstitute.org/evidence-review/what-do-the-public-think-about-ai/> accessed 4 March 2025.
[42] Roshni Modhvadia, Tvesha Sippy, Octavia Field Reid and Helen Margetts, ‘How Do People Feel About AI?’ (Ada Lovelace Institute and The Alan Turing Institute, March 2025) <https://attitudestoai.uk/> accessed 25 March 2025.
[43] Octavia Field Reid and others, ‘What Do the Public Think About AI?’ (Ada Lovelace Institute, October 2023) <https://www.adalovelaceinstitute.org/evidence-review/what-do-the-public-think-about-ai/> accessed 4 March 2025.
[44] See Susan Oman and Sara Cannizzaro’s forthcoming publication ‘People’s Feelings About AI: An Evidence Review’ (The University of Sheffield, 2025), available here: <https://digitalgood.net/dg-research/public-voices-in-ai/>.
[45] Anna Studman, ‘Access Denied?’ (Ada Lovelace Institute, September 2023) <https://www.adalovelaceinstitute.org/wp-content/uploads/2024/07/Ada-Lovelace-Institute-Access-denied.pdf> accessed 3 February 2025.
[46] Aidan Peppin, ‘The Citizens’ Biometrics Council’ (Ada Lovelace Institute, March 2021) <https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/> accessed 8 March 2025.
[47] See Susan Oman and Sara Cannizzaro’s forthcoming publication ‘People’s Feelings About AI: An Evidence Review’ (The University of Sheffield, 2025), available here: <https://digitalgood.net/dg-research/public-voices-in-ai/>.
[48] Jack Stilgoe and Tom Cohen, ‘Rejecting acceptance: learning from public dialogue on self-driving vehicles’ (2021) 48 Science and Public Policy 849–859 <https://doi.org/10.1093/scipol/scab060> accessed 20 March 2025.
[49] Clare Bambra, ‘Health Divides: Where You Live Can Kill You’ (Policy Press, 2016).
[50] James Banks, ‘Geography’ (2024) 3 Oxford Open Economics i582.
[51] Anna Studman, ‘Access Denied?’ (Ada Lovelace Institute, September 2023) <https://www.adalovelaceinstitute.org/wp-content/uploads/2024/07/Ada-Lovelace-Institute-Access-denied.pdf> accessed 3 February 2025.
[52] Isabella Pereira, Andrew McKeown and Iona Gallacher, ‘Public Perceptions of Inequality in the UK: A Summary of Key Findings from the Qualitative Research’ (2024) 3 Oxford Open Economics i88.
[53] Sunder Katwala talks about the importance of ‘geography’ in attending to this in ‘Lessons from Britain’s Riots for Resilience and Cohesion’ (Philea, September 2024) <https://philea.eu/opinions/lessons-from-britains-riots-for-resilience-and-cohesion/> accessed 24 March 2025.
[54] ‘Get Britain Working White Paper’ (GOV.UK, November 2024) <https://www.gov.uk/government/publications/get-britain-working-white-paper/get-britain-working-white-paper> accessed 21 March 2025.
[55] ‘Levelling Up the United Kingdom’ (GOV.UK, February 2022) <https://www.gov.uk/government/publications/levelling-up-the-united-kingdom> accessed 21 March 2025.
[56] ‘“Devolution Revolution” Forges Ahead with More Powers for Mayors’ (GOV.UK, December 2024) <https://www.gov.uk/government/news/devolution-revolution-forges-ahead-with-more-powers-for-mayors> accessed 11 March 2025.
[57] Ben Page, ‘Perceptions: The Fact and Fiction of Trust and Satisfaction’ (Local Government Association) <https://www.local.gov.uk/our-support/leadership-workforce-and-communications/comms-hub-communications-support/futurecomms-1> accessed 21 March 2025.
[58] ‘Democracy Made in England: Where Next for English Local Government?’ (Electoral Reform Society, March 2022) <https://www.electoral-reform.org.uk/latest-news-and-research/publications/democracy-made-in-england-where-next-for-english-local-government/> accessed 21 March 2025.
[59] Edward Scott, ‘Local Government and Local Democracy in England’ (House of Lords Library, June 2023) <https://lordslibrary.parliament.uk/local-government-and-local-democracy-in-england/> accessed 21 March 2025.
[60] ‘Save Local Services: Council Pressures Explained’ (Local Government Association) <https://www.local.gov.uk/about/campaigns/save-local-services/save-local-services-council-pressures-explained> accessed 21 March 2025.
[61] ‘English Devolution White Paper’ (GOV.UK, December 2024) <https://www.gov.uk/government/publications/english-devolution-white-paper-power-and-partnership-foundations-for-growth/english-devolution-white-paper> accessed 21 March 2025.
[62] Beth W Kamunge, ‘Place and Health Inequalities: An Ethical Framework for Evaluation and Developing Policy’ (UK Pandemic Ethics Accelerator) <https://ukpandemicethics.org/wp-content/uploads/2022/04/Place-IF.pdf> accessed 17 January 2025.
[63] Sara Marcucci, Uma Kalkar and Stefaan Verhulst, ‘AI Localism in Practice’ (The GovLab) <https://files.thegovlab.org/ailocalism-in-practice.pdf> accessed 1 December 2024.
[64] ‘What Do We Mean When We Talk About a Good Digital Society?’ (The British Academy, 2024) <https://www.thebritishacademy.ac.uk/publications/what-do-we-mean-when-we-talk-about-a-good-digital-society/> accessed 25 February 2025.
[65] ‘People’s AI Stewardship Summit’ (Royal Academy of Engineering) <https://raeng.org.uk/policy-and-resources/engineering-policy/futures-and-dialogue/people-s-ai-stewardship-summit> accessed 4 July 2024.
[66] ‘Children’s AI Summit’ (The Alan Turing Institute) <https://www.turing.ac.uk/events/childrens-ai-summit> accessed 5 March 2025.
[67] Elizabeth Bondi and others, ‘Envisioning Communities: A Participatory Approach Towards AI for Social Good’, Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (Association for Computing Machinery, 2021) <https://doi.org/10.1145/3461702.3462612> accessed 19 July 2024.
[68] Ansarullah Hasas and others, ‘AI for Social Good: Leveraging Artificial Intelligence for Community Development’ (2024) 2 Journal of Community Service and Society Empowerment 196.
[69] Jessica Paddock, ‘A Place-Based Case Study Approach’ (Aspect) <https://aspect.ac.uk/wp-content/uploads/2021/05/5.-Plac-bas-Jessica-Paddock-A4-Guide-2.pdf> accessed 21 March 2025.
[70] Anita Gurumurthy and Nandini Chami, ‘The Wicked Problem of AI Governance’ (Social Science Research Network, 1 October 2019) <https://papers.ssrn.com/abstract=3872588> accessed 21 March 2025.
[71] Martín Carcasson and Leah Sprain, ‘Beyond Problem Solving: Reconceptualizing the Work of Public Deliberation as Deliberative Inquiry’ (2016) 26 Communication Theory 41.
[72] Ministry of Housing, Communities and Local Government, ‘The English Indices of Deprivation 2019’ (GOV.UK, 2019) <https://www.gov.uk/government/statistics/english-indices-of-deprivation-2019>
[73] Ollie Corfe, ‘Mapped: The Places in Each UK Nation Where Life Is Hardest’ (Express.co.uk, 1 October 2023) <https://www.express.co.uk/news/uk/1816964/most-deprived-parts-of-uk-map-spt> accessed 4 March 2025.
[74] Graham Brownlow, ‘What Is the Economic Legacy of Northern Ireland’s Troubles?’ (Economics Observatory, May 2021) <https://www.economicsobservatory.com/what-is-the-economic-legacy-of-northern-irelands-troubles> accessed 21 February 2025.
[75] Richard Brown, Charles Wilson and Yasmin Begum, ‘The Price We Pay: The Social Impact of the Cost-of-Living Crisis’ (National Centre for Social Research) <https://natcen.ac.uk/sites/default/files/2023-08/Society%20Watch%202023%20The%20Price%20we%20Pay%20V2.pdf>
[76] Clare Dyer, ‘Air Pollution from Road Traffic Contributed to Girl’s Death from Asthma, Coroner Concludes’ (2020) 371 BMJ m4902.
[77] Rory Carroll, ‘Overstretched Police Brace for Fresh Clashes in Belfast after Week of Riots’ (The Guardian, 9 August 2024) <https://www.theguardian.com/uk-news/article/2024/aug/09/overstretched-police-brace-fresh-clashes-belfast-week-riots> accessed 21 February 2025.
[78] Ross Marshall, ‘Muslim Council in Southampton speak out on protest’ (Daily Echo, 6 August 2024) <https://www.dailyecho.co.uk/news/24500381.muslim-council-southampton-speak-far-right-protest/> accessed 21 February 2025.
[79] ‘Lambeth Council Leader Condemns the Far-Right Violence Breaking out around the Country’ (Brixton Buzz, 6 August 2024) <https://www.brixtonbuzz.com/2024/08/lambeth-council-leaders-condemns-far-right-violence-and-exploitation-in-statement/> accessed 21 February 2025.
[80] Helen Kennedy and others, ‘Public Understanding and Perceptions of Data Practices: A Review of Existing Research’ (Living With Data, May 2020) <https://livingwithdata.org/project/wp-content/uploads/2020/05/living-with-data-2020-review-of-existing-research.pdf> accessed 21 February 2025.
[81] Sarah Pink and others, ‘Mundane Data: The Routines, Contingencies and Accomplishments of Digital Living’ (2017) 4 Big Data & Society 1.
[82] Helen Kennedy, Susan Oman and others, ‘Data Matters are Human Matters: final Living with Data report on public perceptions of public sector data uses’ (Living With Data, 2022) <https://livingwithdata.org/project/wp-content/uploads/2022/10/LivingWithData-end-of-project-report-24Oct2022.pdf> accessed 10 February 2025.
[83] Helen Kennedy and Rosemary Lucy Hill, ‘The Feeling of Numbers: Emotions in Everyday Engagements with Data and Their Visualisation’ (2018) 52 Sociology 830.
[84] Hannah Ditchfield and others, ‘What Ifs: The Role of Imagining in People’s Reflections on Data Uses’ (2024) 30 Convergence 6.
[85] Doreen Massey, ‘A Global Sense of Place’ in Timothy Oakes (ed) The Cultural Geography Reader (Routledge 2008).
[86] Michel Maffesoli, ‘From Society to Tribal Communities’ (2016) 64 The Sociological Review 739.
[87] ‘Northern Ireland’s Voluntary Sector Faces Critical Financial Pressure from NICs Increase’ (NICVA, 13 December 2024) <https://www.nicva.org/article/northern-irelands-voluntary-sector-faces-critical-financial-pressure-from-nics-increase> accessed 21 February 2025.
[88] Margaret McNulty, ‘Refugees in Northern Ireland’ <https://www.embraceni.org/wp-content/uploads/2012/09/Refugee-booklet-10.3.pdf>
[89] Southampton’s ex-military population is lower than the national average. However, the city’s connection with naval and military infrastructure over the centuries has created longstanding support structures for veterans’ associational life and a pro-military civic culture. See: ‘Southampton Strategic Assessment (JSNA): Veterans’ <https://data.southampton.gov.uk/media/y2unlbto/veterans-page-content.pdf> accessed 21 February 2025.
[90] Marlys K Christianson and Michelle A Barton, ‘Sensemaking in the Time of COVID‐19’ (2021) 58 Journal of Management Studies 572.
[91] Virginia Braun, Victoria Clarke and Naomi Moller, ‘Pandemic Tales: Using Story Completion to Explore Sense-Making Around COVID-19 Lockdown Restrictions’ in Helen Kara and Su-Ming Khoo (eds), Researching in the Age of COVID-19: Volume III: Creativity and Ethics, vol 3 (Bristol University Press 2020) <https://www.cambridge.org/core/books/researching-in-the-age-of-covid19/pandemic-tales-using-story-completion-to-explore-sensemaking-around-covid19-lockdown-restrictions/B968A55186FD0241C3A296D0F4ABE75E> accessed 17 January 2024.
[92] Helen Kennedy and others, ‘Public Understanding and Perceptions of Data Practices: A Review of Existing Research’ (Living With Data, May 2020) <https://livingwithdata.org/project/wp-content/uploads/2020/05/living-with-data-2020-review-of-existing-research.pdf> accessed 21 February 2025.
[93] Sarah Pink, ‘Emerging Technologies: Life at the Edge of the Future’ (1st edn, Routledge 2022).
[94] Steve Ballinger, ‘After the Riots’ (British Future, 11 September 2024) <https://www.britishfuture.org/after-the-riots/> accessed 21 March 2025.
[95] ‘Elon Musk’s Curious Fixation with Britain’ (BBC News, 22 December 2024) <https://www.bbc.com/news/articles/cy7kpvndyyxo> accessed 21 March 2025.
[96] Mark Sellman, ‘“Scheming” ChatGPT Tried to Stop Itself from Being Shut Down’ (The Times, December 2024) <https://www.thetimes.com/uk/technology-uk/article/chatgpt-o1-openai-prevents-own-deletion-tmvgbb7ls> accessed 4 March 2025.
[97] ‘Northern Ireland refugee statistics’ (Law Centre NI and Migration Justice Project, July 2023) <https://www.lawcentreni.org/wp-content/uploads/2023/07/LCNI-briefing-refugee-statistics-July-2023-1.pdf> accessed 12 February 2025.
[98] ‘Our History – Corrymeela’ <https://www.corrymeela.org/about/our-history> accessed 12 February 2025.
[99] For example: ‘Starling Collective’ (CommunityNI) <https://www.communityni.org/organisation/starling-collective> accessed 21 March 2025.
[100] Graham Brownlow, ‘What Is the Economic Legacy of Northern Ireland’s Troubles?’ (Economics Observatory, May 2021) <https://www.economicsobservatory.com/what-is-the-economic-legacy-of-northern-irelands-troubles> accessed 21 February 2025.
[101] ‘Brixton: A Queer History’ (LGBT HERO, 10 February 2023) <https://www.lgbthero.org.uk/brixton-a-queer-history> accessed 14 March 2025.
[102] Julian Reiss, ‘Public Goods’ in Edward N Zalta (ed), The Stanford Encyclopedia of Philosophy (Fall 2021 edn, Stanford University 2021) <https://plato.stanford.edu/archives/fall2021/entries/public-goods/> accessed 18 February 2025.
[103] Les Levidow and Theo Papaioannou, ‘State Imaginaries of the Public Good: Shaping UK Innovation Priorities for Bioenergy’ (2013) 30 Environmental Science & Policy 36.
[104] Maximilian Jaede, ‘The Concept of the Common Good’ (The British Academy) <https://www.thebritishacademy.ac.uk/documents/1851/Jaede.pdf>
[105] C Broom, ‘The Erosion of the Public Good: The Implications of Neoliberalism for Democracy’ 10 Citizenship, Social and Economics Education 140–146.
[106] Ellen Hazelkorn and Andrew Gibson, ‘Public Goods and Public Policy’ (Centre for Global Higher Education, May 2017) <https://www.researchcghe.org/wp-content/uploads/migrate/publications/wp18.pdf>
[107] Jennifer Cearns, ‘Safeguarding Data: The Data Consensus and the Public Good in Children’s Social Services’ 42 Cambridge Journal of Anthropology 1.
[108] Mhairi Aitken, Carol Porteous and Emily Creamer, ‘Whose Benefit Is It Anyway?’ (International Journal of Population Data Science) <https://ijpds.org/article/view/833/750> accessed 9 March 2025.
[109] Barry Knight, ‘Rethinking Poverty: What Makes a Good Society?’ (Policy Press 2017) <https://www.degruyter.com/document/doi/10.56687/9781447340638/html> accessed 9 March 2025.
[110] A similar focus on equity and values related to compassion, in relation to ‘good society’, were seen in: Barry Knight, ‘Rethinking Poverty: What Makes a Good Society?’ (Policy Press 2017) <https://www.degruyter.com/document/doi/10.56687/9781447340638/html> accessed 9 March 2025.
[111] ‘Foundations for the Common Good’ (Caring to Change, March 2010) <http://www.p-sj.org/files/7.%20Caring%20to%20Change-Foundations%20for%20the%20Common%20Good.pdf> accessed 14 March 2025.
[112] Fran Harkness, Cornelis Rijneveld, Yuncong Liu, Shayda Kashef and Mary Cowan, ‘UK Wide Public Dialogue Exploring What the Public Perceive as “Public Good” Use of Data for Research and Statistics’ (ADR UK, 2022) <https://www.adruk.org/fileadmin/uploads/adruk/Documents/PE_reports_and_documents/ADR_UK_OSR_Public_Dialogue_final_report_October_2022.pdf> accessed 10 December 2024.
[113] Mhairi Aitken, Carol Porteous and Emily Creamer, ‘Whose Benefit Is It Anyway? Public Expectations of Public Benefits from Health Informatics Research’ (2018) 3 International Journal of Population Data Science.
[114] ‘Foundations for the Common Good’ (Caring to Change, March 2010) <http://www.p-sj.org/files/7.%20Caring%20to%20Change-Foundations%20for%20the%20Common%20Good.pdf> accessed 14 March 2025.
[115] Barry Knight, ‘Rethinking Poverty: What Makes a Good Society?’ (Policy Press 2017) <https://www.degruyter.com/document/doi/10.56687/9781447340638/html> accessed 9 March 2025.
[116] Daniele Rotolo, Diana Hicks and Ben R Martin, ‘What Is an Emerging Technology?’ (2015) 44 Research Policy 1827.
[117] Serhat Burmaoglu, Olivier Sartenaer and Alan Porter, ‘Conceptual Definition of Technology Emergence: A Long Journey from Philosophy of Science to Science Policy’ (2019) 59 Technology in Society 3.
[118] Daniele Rotolo, Diana Hicks and Ben R Martin, ‘What Is an Emerging Technology?’ (2015) 44 Research Policy 1827.
[119] Sarah Pink, ‘Emerging Technologies: Life at the Edge of the Future’ (1st edn, Routledge 2022).
[120] We and AI <https://weandai.org/> accessed 21 March 2025.
[121] ‘We Must Act on AI Literacy to Protect Public Power’ (Joseph Rowntree Foundation, 8 February 2024) <https://www.jrf.org.uk/ai-for-public-good/we-must-act-on-ai-literacy-to-protect-public-power> accessed 23 July 2024.
[122] Virginia S Lee, ‘What is inquiry-guided learning?’ in Virginia S Lee (ed), Teaching and Learning through Inquiry: A Guidebook for Institutions and Instructors (Stylus Publishing 2004).
[123] Jack Stilgoe and David H Guston, ‘Responsible Research and Innovation’ (UCL Discovery) <https://discovery.ucl.ac.uk/id/eprint/10052401/1/Stilgoe_Guston_responsible_innovation_2017.pdf> accessed 10 March 2025.
[124] Martin W Bauer and others, ‘What can we learn from 25 years of PUS survey research? Liberating and expanding the agenda’ (2007) 16 Public Understanding of Science 1 <https://doi.org/10.1177/0963662506071287>.
[125] Jack Stilgoe, Simon J Lock and James Wilsdon, ‘Why Should We Promote Public Engagement with Science?’ (2014) 23 Public Understanding of Science 4.
[126] Allison Woodruff and others, ‘“A Cold, Technical Decision-Maker”: Can AI Provide Explainability, Negotiability, and Humanity?’ (arXiv, 1 December 2020) <http://arxiv.org/abs/2012.00874> accessed 4 July 2024.
[127] Annette Markham, ‘The Limits of the Imaginary: Challenges to Intervening in Future Speculations of Memory, Data, and Algorithms’ (2021) 23 New Media & Society 382.
[128] Helen Kennedy and Rosemary Lucy Hill, ‘The Feeling of Numbers: Emotions in Everyday Engagements with Data and Their Visualisation’ (2018) 52 Sociology 830.
[129] Marian Barnes, ‘Passionate Participation: Emotional Experiences and Expressions in Deliberative Forums’ (2008) 28 Critical Social Policy 461.
[130] Ada Lovelace Institute, ‘Access Denied? Episode 2: The Emotional Life of Data’ <https://www.youtube.com/watch?v=KcopKvPijhw> accessed 19 June 2024.
[131] House of Lords, Select Committee on Artificial Intelligence, ‘AI in the UK: Ready, Willing and Able’ (2018) <https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf> accessed 15 January 2025.
[132] See Susan Oman and Sara Cannizzaro’s forthcoming publication ‘People’s Feelings About AI: An Evidence Review’ (The University of Sheffield, 2025), available here: <https://digitalgood.net/dg-research/public-voices-in-ai/>.
[133] See: Helen Kennedy and others, ‘Public Understanding and Perceptions of Data Practices: A Review of Existing Research’ (Living With Data, May 2020) <https://livingwithdata.org/project/wp-content/uploads/2020/05/living-with-data-2020-review-of-existing-research.pdf> accessed 21 February 2025; Helen Kennedy, Susan Oman and others, ‘Data Matters are Human Matters: final Living with Data report on public perceptions of public sector data uses’ (Living With Data, 2022) <https://livingwithdata.org/project/wp-content/uploads/2022/10/LivingWithData-end-of-project-report-24Oct2022.pdf> accessed 10 February 2025.
[134] There is a breadth of evidence highlighting that principles of equity, inclusion, fairness and transparency are important to the public. See: Octavia Field Reid and others, ‘What Do the Public Think About AI?’ (Ada Lovelace Institute, 29 October 2023) <https://www.adalovelaceinstitute.org/evidence-review/what-do-the-public-think-about-ai/> accessed 4 March 2025.
[135] The project Living with Data found that people actioned these values in regard to others; they used the term ‘data solidarities’ to refer to this. See: Helen Kennedy, Susan Oman and others, ‘Data Matters are Human Matters: final Living with Data report on public perceptions of public sector data uses’ (Living With Data, 2022) <https://livingwithdata.org/project/wp-content/uploads/2022/10/LivingWithData-end-of-project-report-24Oct2022.pdf> accessed 10 February 2025.
[136] Roshni Modhvadia, Tvesha Sippy, Octavia Field Reid and Helen Margetts, ‘How Do People Feel About AI?’ (Ada Lovelace Institute and The Alan Turing Institute, March 2025) <https://attitudestoai.uk/> accessed 25 March 2025.
[137] Celia Nieto Agraz and others, ‘A Survey of Robotic Systems for Nursing Care’ (Frontiers in Robotics and AI, 2022) <https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2022.832248/full> accessed 21 March 2025.
[138] Roshni Modhvadia, Tvesha Sippy, Octavia Field Reid and Helen Margetts, ‘How Do People Feel About AI?’ (Ada Lovelace Institute and The Alan Turing Institute, March 2025) <https://attitudestoai.uk/> accessed 25 March 2025.
[139] Ansarullah Hasas and others, ‘AI for Social Good: Leveraging Artificial Intelligence for Community Development’ (2024) 2 Journal of Community Service and Society Empowerment 196.
[140] ‘Paramilitary Crime Task Force Seize Cash and Mobile Phones Following North Belfast Searches’ (PSNI, May 2024) <https://www.psni.police.uk/latest-news/paramilitary-crime-task-force-seize-cash-and-mobile-phones-following-north-belfast> accessed 22 February 2025.
[141] The Ada-Turing AI attitudes survey identified that 91% of people (in the nationally representative UK sample) saw facial recognition technologies as broadly beneficial; 88% of the Northern Ireland sample said the same. This difference is not statistically significant, which means that we cannot say that these views diverge because of the local context. See: Roshni Modhvadia, Tvesha Sippy, Octavia Field Reid and Helen Margetts, ‘How Do People Feel About AI?’ (Ada Lovelace Institute and The Alan Turing Institute, March 2025) <https://attitudestoai.uk/> accessed 25 March 2025.
[142] ‘Is AI a Threat to Human Creativity?’ (Institute for Ethics in AI, University of Oxford) <https://www.oxford-aiethics.ox.ac.uk/ai-threat-human-creativity> accessed 21 February 2025.
[143] Roshni Modhvadia, Tvesha Sippy, Octavia Field Reid and Helen Margetts, ‘How Do People Feel About AI?’ (Ada Lovelace Institute and The Alan Turing Institute, March 2025) <https://attitudestoai.uk/> accessed 25 March 2025.
[144] Ana Isabel Nunes, ‘Legislative Theatre: How This Interactive Artform Empowers Communities to Create Social Change’ (The Conversation, 6 February 2025) <http://theconversation.com/legislative-theatre-how-this-interactive-artform-empowers-communities-to-create-social-change-247657> accessed 24 March 2025.
[145] ‘What Is Peer Research?’ (Institute for Community Studies) <https://icstudies.org.uk/about-us/what-peer-research> accessed 30 August 2022; ‘Ten Principles of Peer Research’ (Peer Research Network) <https://www.youngfoundation.org/peer-research-network/about/ten-principles-of-peer-research/> accessed 24 March 2025.
[146] ‘Co-Design’ (Participedia) <https://participedia.net/method/co-design> accessed 24 March 2025.
[147] ‘Deliberation’ (Participedia) <https://participedia.net/method/560> accessed 24 March 2025.
[148] Jayanthi Lingham and Chloe Alexander, ‘Using Arts-Based Methods for Data Collection’ (the Centre for Care, 16 August 2023) <https://centreforcare.ac.uk/commentary/2023/08/using-arts-based-methods/> accessed 24 March 2025.
[149] ‘How to Run a Public Dialogue on Technologies That Don’t yet Exist? – It’s Never Too Early to Engage’ (Sciencewise, 9 December 2022) <https://sciencewise.org.uk/2022/12/how-to-run-a-public-dialogue-on-technologies-that-dont-yet-exist-its-never-too-early-to-engage/> accessed 24 March 2025.
[150] Rafael Ramirez and others, ‘Scenarios as a Scholarly Methodology to Produce “Interesting Research”’ (2015) 71 Futures 70.
[151] Ana Isabel Nunes, ‘Legislative Theatre: How This Interactive Artform Empowers Communities to Create Social Change’ (The Conversation, 6 February 2025) <http://theconversation.com/legislative-theatre-how-this-interactive-artform-empowers-communities-to-create-social-change-247657> accessed 24 March 2025.
[152] Bella Dicks, ‘Multimodal Analysis’ in Paul Atkinson and others (eds), SAGE Research Methods Foundations (Sage 2019) <http://dx.doi.org/10.4135/9781526421036831970> accessed 24 March 2025.