
Emerging Technology & Industry Practice

Ensuring that the benefits of new technologies are distributed equitably and that potential harms are prevented.

Who we are and what we do

Emerging Technology & Industry Practice builds evidence on the impacts of new AI and data-driven technologies on affected communities, studies the effectiveness of emerging AI governance and accountability practices, and ensures this evidence is reflected in AI policy debates. We want to help ensure that the benefits of these new technologies are distributed equitably and that potential harms are prevented.

Our work focuses on these core questions:

  • What are the societal implications of new and emerging technologies? How might these technologies impact different individuals and groups of people in positive and negative ways?
  • How can developers and deployers of AI technologies be held more accountable for the impacts their technologies will have on people and society?
  • What practices can policymakers and regulators put in place to ensure developers of AI technologies are accountable to the people impacted by their technologies?

We conduct research into emerging technology areas, business models and industry trends, regularly working with industry labs as sites of study. We also produce recommendations for industry and policymakers on better practices and processes for ensuring AI systems operate safely and legally. Our research methods include working groups and expert convenings, public deliberation initiatives, desk-based research and synthesis, policy and legal analysis, surveys, qualitative interviewing and ethnographic methods.

The following principles guide our engagement with industry:

  • Openness and transparency: We conduct open and transparent research. If necessary, our convenings and research may grant anonymity to participating organisations and individuals, but our research, findings and recommendations will always be made public.
  • Independence and balance: Because we are independent of government and industry, we can determine the focus and content of our work and take a long-term view, mapping, observing and critically examining complex systems. We do not accept funding from, provide consultancy for, or act as contractors to the private sector.
  • Proximity to those affected: We work to close the gap between designers and developers of technologies and members of the public who are affected by them, particularly communities that are traditionally underrepresented and minoritised. We are increasingly using public participation research to acknowledge and account for the asymmetries of power between those building technologies and those affected by them.

Through our work, we build awareness and understanding within national governments of the risks of emerging technologies and the mitigations they can put in place to address potential harms. We develop and evaluate practices and interventions for improving accountability over AI and data-driven systems with key actors in the public and private sectors. And we shape norms and discourse in the technology industry around the societal impact of AI and data-driven systems.

What we are working on

We are currently working on the following project:


AI and genomics futures

This joint project with the Nuffield Council on Bioethics explores how AI is transforming the capabilities and practice of genomic science.

Our impact

We want to see a world where developers of AI and data-driven technologies take the needs of people and society into account. Below are some ways in which our recent work has helped to move in that direction.

In 2023 we published two detailed reports – on monitoring AI and assessing AI risk – to directly support decision-making in emerging areas, commissioned through the Department for Digital, Culture, Media & Sport (DCMS) Science and Analysis R&D Programme. Through this work, Ada was able to fill research gaps identified by DCMS and inform the work of policymakers in the department.

Our 2023 report Going public examined how commercial AI labs are involving the public to make AI systems more accountable. Through a series of interviews with industry professionals, we explored how these labs understand public participation, the approaches they take to include people in product development and deployment, and the obstacles they face when implementing these approaches. The paper was accepted at the Association for Computing Machinery's conference on Fairness, Accountability and Transparency (FAccT), and has been cited by the World Economic Forum and the New Statesman.

We also published Safe before sale in 2023, which explores what FDA-style regulation for foundation models could look like. The report identifies a number of general principles and recommendations for strengthening the governance of foundation models, such as a pre-market approval process, pre-notification of large training runs, mandatory information sharing, strong regulator powers to access models, post-market monitoring and clarified liability. We briefed the UK AI Safety Institute, the EU AI Office and the US government on its findings, several of which have been cited in recent policy proposals in these three jurisdictions.

In 2022, we worked with the BBC to explore the development and use of recommendation systems in public service media. In particular, we looked at how public service values are operationalised, what optimisation means in this context, the particular ethical considerations that arise, and how organisations seeking to serve the public can minimise risks and maximise social value.