
AI assistants

Helpful or full of hype?

Julia Smakman

13 August 2024

Reading time: 7 minutes

There has been considerable buzz around the development of ‘AI assistants’ in recent months, with major technology companies – like Amazon, Google, Meta, Microsoft, Apple, Anthropic and OpenAI – stating their intention to create AI assistants that mediate the entire digital experience of users of their products.

Some of these AI assistants are being marketed as ‘smart interns’, capable of carrying out any number of general and administrative tasks. Other AI assistants have been developed for a specific purpose, like providing mental health care, legal advice, or companionship, often built on top of another company’s ‘foundation model’.

It makes sense that technology companies would seek to further leverage the success of foundation models like GPT-4, Gemini and Claude through the development of AI assistants. The scale of this development has the potential to change how billions of people experience the internet and access increasingly digitised public services. However, the release of these assistants has outpaced the ability of companies and public bodies to evaluate and ensure they are safe, reliable, and accountable.

What is an AI assistant?

Providing a clear definition of an ‘AI assistant’ is a challenge. Often used interchangeably with the terms ‘chatbot’ or ‘AI agent’, it broadly refers to a system that uses a natural language interface (such as text or audio) to interact with its user in a human-like manner, and that can potentially take actions on the user’s behalf.

In this blog post, we will focus on the ways in which this new generation of AI assistants distinguishes itself from previous systems, like Apple’s Siri or Amazon’s Alexa.

First, they are powered by more advanced, multimodal foundation models that allow them to interact with users in a more human-like way. This can make it difficult for a user to distinguish a conversation with the AI assistant from interactions with a real person.

Second, AI assistants are increasingly capable of retaining information about a user for longer periods of time. Longer ‘context windows’ allow them to memorise a user’s interaction history – enabling them to offer an increasingly personalised experience over time.

Third, these systems are designed to be capable of taking more complex actions in digital space (such as booking an appointment or filling in a multi-part form). This is done through mechanisms like controlling a user’s web browser, or interacting with a website’s application programming interface (API).
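To make the API route concrete, here is a minimal, purely illustrative sketch of the kind of call an assistant might make on a user’s behalf once it has extracted the relevant details from a conversation. The service, endpoint and field names are hypothetical and not drawn from any particular product.

```python
# Purely illustrative: an assistant submitting a booking to a hypothetical
# appointments API on the user's behalf. The URL and field names are invented.
import requests

def book_appointment(api_base: str, slot: str, name: str) -> bool:
    """Send a booking request using details extracted from the user's conversation."""
    response = requests.post(
        f"{api_base}/appointments",                  # hypothetical endpoint
        json={"slot": slot, "customer_name": name},  # hypothetical fields
        timeout=10,
    )
    return response.status_code == 201               # 201 Created = booking accepted

# Example call once the assistant has identified the desired slot:
# book_appointment("https://example.com/api", "2024-08-20T10:00", "A. User")
```

The browser-control route is harder to sketch in a few lines: rather than calling a documented endpoint, the assistant operates a real user interface, which is part of why reliability is a live concern.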

Cashing in on foundation models

The development of advanced AI assistants is seen as a logical next step for major technology companies to take as they look to monetise the significant financial and resource investment that they have made in training foundation models.

Developers of AI assistants promise they can increase workplace productivity, improve access to professional services like mental health care, and even reduce costs by removing the need for human workers to carry out certain tasks. Many of these are the same claims made about earlier AI technologies. Currently, there is scant evidence to back them up.

That hasn’t stopped major companies from leveraging their existing market power – across social media platforms, mobile devices and computers, compute hardware, and business-to-business cloud services – to integrate AI assistants as a new ‘intermediary’ for existing online products and services used by billions worldwide.

They may replace older generations of AI assistants, like Siri or Alexa, that users are already comfortable with and that have been significant drivers of profit for Apple and Amazon. These advanced assistants offer the ability to capture more data on a user’s behaviour and activity to use for targeted advertising.

Several smaller companies are building purpose-specific AI assistants for services like mental health care, financial advice, and legal services, using foundation models from major companies like OpenAI’s GPT-4. At least some of these services have seen significant uptake: Chai, an AI companion, reports 4 million monthly users, and Harvey.AI, a legal AI assistant, has received millions in investment and is being used by several large law firms.

False promises?

Technology companies claim some AI assistants can support personal administrative tasks or lifestyle improvements – from helping someone brainstorm, to keeping track of appointments, to suggesting healthier or more energy-efficient behaviours. Other assistants might be deployed to augment, or even replace, the people who deliver key services and relationships: therapists, personal companions, lawyers, financial advisors and civil servants. However, it is currently unclear to what extent the promises around AI assistants will materialise.

There are significant technical challenges that companies would need to overcome to create reliable, functional assistants capable of the many tasks that companies claim they can achieve. This includes ensuring interoperability between different agents so they do not malfunction or ‘crash’ a service they are interacting with.

And crucially, there are serious concerns that the technology powering AI assistants cannot reliably replicate the professional norms, sensitivities and safeguards involved in many human-to-human professions.

Risks: Small and large scale

The use of advanced AI assistants on a large scale carries risks for individual people and for society as a whole.

At an individual level – because of their ability to convincingly imitate a human and collect data to personalise content – these systems could manipulate users, or create an overreliance on a fallible technology.

This could have serious implications for privacy and autonomy. The collection of data over time introduces significant privacy risks, as privacy policies may not be clear on what a user’s data may be used for and how long it is retained. The types of information shared with an engaging conversational agent – such as one used in healthcare – may well be more sensitive, intimate or detailed than data presently collected and inferred through tracking browsing behaviour.

The personalised experience created by the collection of this data may also make it harder for people to stop using or switch their AI assistant, because it can create an emotional or material dependence. Last year, AI companion Replika was temporarily banned by the Italian data protection authority because of its risks to minors and vulnerable individuals, and for GDPR violations.

In addition, the personal information that an AI assistant gathers about a user could be used to (hyper)nudge them towards certain consumer behaviours in ways that are deceptive, not in their interests, or even harmful to their wellbeing.

At a systemic level, there are economic and competition risks. AI assistants may reshape how media and information are delivered in online environments, creating another paradigm shift akin to the launch of social media and its impact on the media advertising market.

There are also risks that arise from the use of AI assistants to mediate traditionally sensitive human-to-human relationships, as in the case of providing legal advice or mental health care. These are domains that have developed decades of professional norms and practices to prevent catastrophic accidents and provide redress for people who have been harmed. Already, there have been cases of a mental health helpline chatbot giving harmful dieting advice to people with eating disorders, and of a New York City chatbot erroneously advising local business owners to commit illegal acts.

It will be essential to determine through rigorous testing whether these systems work as intended in these areas, and whether new legal protections or mandatory design requirements will be needed. It will also be vital to study the impacts of these systems on these specific sectors – it may be that advanced AI assistants are simply not suitable for these sensitive contexts.

Open questions

Members of the public and policymakers may be struggling to conceptualise what a future with AI assistants intermediating our digital environments might look like. But rather than await their arrival, it will be crucial to assess whether the UK’s regulatory and legal landscape is mature enough to govern the risks these technologies present.

Without an assessment of these emerging technologies, their design choices, and the contexts they will operate in, there may be gaps in regulation and governance that could lead to serious harms.

It is possible that AI assistants may make our lives easier, better and more enjoyable. But we cannot take technology companies at their word. It is crucial that policymakers, regulators and the public understand the trends in the design and deployment of AI assistants, and their potential benefits and risks.


In the coming months, Ada will be exploring questions around how AI assistants work; the benefits and risks of their use in various contexts; and how our legal and policy landscape should adapt to address the challenges their introduction may bring.

If you’re interested in the work of this project, please email us.


Image credit: EyeEm Mobile GmbH