Anticipating today
Co-designing new resources for policymakers to better anticipate societal and ethical impacts of emerging technologies
Project background: the challenge
‘Anticipatory thinking’, or ‘futures thinking’, in policy development is a critical skill for the responsible governance of emerging technologies. It moves debate beyond current issues and embeds long-term strategic thinking. It enables policymakers to assess technologies in a timely way, so that they can guide regulatory or legislative strategies and explore alternative policy choices.
The UK Government Office for Science has published a Futures Toolkit that aims to support policymakers in their anticipatory work. However, policymakers and regulators are often pressured to focus on the issues of today, leaving little capacity or time for them to engage in longer-term thinking.
Even when time is allocated to futures thinking, methods often focus on the short-term risks of a technology rather than its medium- to long-term societal and ethical impacts. Current methods do not explore alternative visions of how technologies could benefit different societal groups.
Taken together, this means policymakers are ill-equipped to robustly consider how different people in society might experience a technology or to anticipate what kinds of ethical or societal impacts might arise from its use.
Project aims
To address this challenge, Dr Federica Lucivero at the University of Oxford, in partnership with the Ada Lovelace Institute and the Nuffield Council on Bioethics, is developing a set of futures resources.
The aim is to improve the skills of UK public sector institutions, to help them anticipate social and ethical implications of emerging technologies and imagine alternative ways that technologies could benefit different societal groups.
These resources will help policymakers to better anticipate the possible, likely and unlikely outcomes of policy decisions about emerging technologies. These insights will strengthen understanding, leading to regulation and policies that are proportionate and reflect the needs of the population.
This improved understanding is particularly needed for questions about the use and regulation of AI technologies, which are rapidly changing communities and policies, and which risk exacerbating existing inequalities.
Project objectives
- To investigate enablers and barriers to anticipatory methods among UK public sector institutions.
- To develop a set of practical resources that enable creative and thorough ethical and societal assessments of emerging AI technologies.
- To test these resources with two public sector sites/groups where anticipation of AI and technological impacts is required and refine them based on feedback.
- To disseminate and socialise the resources across wider relevant UK public institutions, increasing the uptake of anticipatory methods in policymaking.
Issues to be addressed
We believe these resources could help UK policymakers working in government, regulation, and local authorities to address several issues, such as:
- How can emerging AI systems be introduced in government or public services in a responsible way?
- What ethical or societal controversies might an emerging technology raise for different individuals or groups?
- How might different policy choices mitigate or address those issues?
Methods
This project will involve interviewing relevant stakeholders, particularly those working in AI policy and institutionalised foresight teams, to build a clear understanding of the complexities and nuances at play within the system.
It will also recruit two UK public sector institutions to assist in co-design, testing and development of the resources. This will ensure that what we create can effectively anticipate the societal impacts of an emerging technology in a way that meets organisational needs.
Key project partners
This project will be delivered by Dr Federica Lucivero (University of Oxford) in collaboration with the Ada Lovelace Institute and Nuffield Council on Bioethics. Dr Lucivero is joining the Ada Lovelace Institute on a Bridging Responsible AI Divides (BRAID) fellowship, which continues until December 2025.
Outputs
Our main project output will be the practical anticipatory resources for policymakers. This could include content such as diagrams, reflection cards, interactive questionnaires and a toolkit for hosting workshops.
We will publish our findings in a report, which will be hosted on partner websites.
We also plan to publish two academic papers summarising the findings of the project.
Contact
If you work in a UK public institution and want to know more, or are interested in participating, please contact Federica Lucivero (University of Oxford), Andrew Strait (Ada Lovelace Institute) or Sophia McCully (Nuffield Council on Bioethics).