
How can (A)I help?

An exploration of AI assistants.

Project background

In the past year, AI ‘assistants’ have been released for a range of purposes, from carrying out practical tasks like web browsing and administrative support, to providing legal advice, mental health support and therapy, and companionship.

The term ‘AI assistant’ broadly refers to the use of a natural language interface (such as text or audio) to interact with a user in a human-like manner, and potentially take actions on behalf of a user.

More recently, the major technology companies – Amazon, Google, Meta, Microsoft and Apple – along with newer market entrants Anthropic and OpenAI, have stated their intention to create AI assistants.

At the same time, it is not yet clear how these companies can ensure these AI assistants are safe, reliable and accountable.

Without an assessment of these emerging technologies and their design choices, gaps in regulation and governance could go unaddressed and lead to serious harms.

It is crucial that policymakers, regulators and members of the public understand the trends in the design and deployment of AI assistants, and their potential benefits and risks.

This will enable policymakers to create the necessary regulatory and legislative frameworks for governance of these technologies.

It is possible that AI assistants could make our lives easier and more enjoyable. But there is also early evidence that the use of this technology comes with risks at both the individual and societal level, and the regulatory and policy landscape may not be equipped to govern these risks.

Project aims

In this project, we aim to understand how AI assistants work, how they are likely to develop over the next few years, and the benefits and risks associated with their use in various contexts.

We will draw on a combination of literature review, legal analysis, public participation research and expert interviews.

The outputs of this project will:

  • explain AI assistants and key terminology, along with specific use cases and possible risks
  • identify the key actors in the value chain, from design and development, through to deployment
  • explore public attitudes towards, and experiences of, AI assistants
  • highlight legal and regulatory gaps in addressing harms caused by AI assistants, and make recommendations for governance.

Image credit: Wachiwit