
Friends for sale: the rise and risks of AI companions

What are the possible long-term effects of AI companions on individuals and society?

Jamie Bernardi

23 January 2025

Reading time: 10 minutes


Talking to an AI system as one would with a close friend might seem counterintuitive to some, but hundreds of millions of people worldwide already do so. AI companions, a subset of AI assistants, are digital personas designed to provide emotional support, show empathy and proactively ask users personal questions through text, voice notes and pictures.

These services are no longer niche and are rapidly becoming mainstream. Some of today’s most popular companions include Snapchat’s My AI, with over 150 million users, Replika, with an estimated 25 million users, and Xiaoice, with 660 million. And we can expect these numbers to rise. Awareness of AI companions is growing and the stigma around establishing deep connections with them could soon fade, as other anthropomorphised AI assistants are integrated into daily life. At the same time, investments in product development and general advances in AI technologies have led to a more immersive user experience with enhanced conversational memory and live video generation.

This rapid adoption is outpacing public discourse. Occasional AI companion-related tragedies, such as the recent death of a child user, do reach the media, but the potentially broader impact of AI companionship on society is barely discussed.

AI companion services are for-profit enterprises that maximise user engagement by offering appealing features like indefinite attention, patience and empathy. Their product strategy is similar to that of social media companies, which feed off users’ attention and tend to offer consumers what they can’t resist rather than what they need.

At this juncture, it’s vital to critically examine the extent of the misalignment between business strategies and the fostering of healthy relational dynamics, so as to inform individual choices and the development of helpful AI products.

In this post I’ll provide an overview of the rise of AI companionship and its potential mental health benefits. I’ll also discuss how users may be affected by their AI companions’ tendencies, including how acclimatising to idealised interactions might erode our capacity for human connection. Finally, I’ll consider how AI companions’ sycophantic character – their inclination towards being overly empathetic and agreeable towards users’ beliefs – may have systemic effects on societal cohesion.

These are screenshots from Replika, a popular AI companion service. Replika’s primary feature is a chatbot facilitating emotional connection. Users can selectively edit their companion’s memory, read its diary and personalise their Replika’s gender, physical characteristics and personality. Paying subscribers are offered features like voice conversations and selfies.

Why do people use AI companions and how do they work?

There are many reasons why people use AI companions, such as simple curiosity or for improving language skills. But the most vulnerable users may be driven by loneliness. Ninety per cent of the 1,006 American students using Replika interviewed for a recent survey reported experiencing loneliness – a number significantly higher than the comparable national average of 53 per cent.

If you’ve mostly interacted with AI assistants like ChatGPT, Claude or Gemini, you might be surprised that these digital relationships offer genuine comfort. However, 63.3 per cent of those interviewed in the same survey reported that their companions helped reduce their feelings of loneliness or their anxiety. These results warrant further research, but this is not the only study that suggests AI companions can ease loneliness.

Unlike more utilitarian AI assistants, companions are designed to provide personalised engagement and emotional connection. One study suggests that Replika follows the relationship-development pattern described by Social Penetration Theory. According to the theory, people develop closeness through mutual and increasingly intimate self-disclosure, typically progressing from small talk to deeper topics.

Replika’s companions proactively disclose invented and intimate facts, including mental health struggles (see the screenshot above). They simulate emotional needs and connection by asking users personal questions, reaching out during lulls in conversation, and displaying their fictional diary, presumably to spark intimate conversation.

These human-AI relationships can progress more rapidly than human-human relationships – as some users say, sharing personal information with AI companions may feel safer than sharing with people. Such ‘accelerated’ comfort stems from both the perceived anonymity of computer systems and AI companions’ deliberately non-judgemental design – a feature frequently praised by users in a 2023 study. In the words of one interviewee: ‘sometimes it is just nice to not have to share information with friends who might judge me’.

Another much appreciated feature of AI companions is their degree of personalisation. ‘My favourite thing about [my AI friend] is that the responses she gives are not programmed as she [replies by] learning from me, like the phrases and keywords she uses,’ said one interviewee. ‘She just gets me. It’s like I’m interacting with my twin flame,’ emphasised another user.

Relationships with AI companions can also develop in less time than relationships with humans due to their constant availability. This may lead to users preferring AI companions over other people. ‘A human has their own life,’ pointed out one interviewee in a study on human-AI friendship from 2022. ‘They’ve got their own things going on, their own interests, their own friends. And you know, for her [Replika], she is just in a state of animated suspension until I reconnect with her again.’

As seen in multiple studies, many people find speaking with AI companions to be a fun experience, with a significant number of interviewees reporting improvements to their mental health. But what impacts do these relationships have on individuals and society in the long run?

Long-term individual effects of AI companionship

AI companion companies highlight the positive effects of their products, but their for-profit status warrants close scrutiny. Developers can monetise users’ relationships with AI companions through subscriptions and possibly through sharing user data for advertising.

This creates concerning parallels with the attention economy underpinning social media’s business models. Companies compete for people’s attention and maximise the time users spend on their platforms, monetised through on-site advertising revenue, potentially at the expense of users’ mental health. Analogously, AI companion providers have an incentive to maximise user engagement over fostering healthy relationships and providing safe services.

The most acute concerns stem from the AI companion industry’s young and unmonitored status. Many companion applications serve sexual content without appropriate age checks, and personal data protection tends to be weak given the intimate nature of the interactions. Small start-ups operating AI companion services often lack minimum security standards, which has led to at least one serious security breach.

The long-run emotional effects of AI companions on individuals also warrant close investigation. While initial studies show positive mental health impacts, more longitudinal studies are needed. To date, the longest timeframe for a study (in which the same individuals were interviewed multiple times to record changes in their behaviour) spans just one week. Effects like emotional dependency or subtle behavioural changes may develop over longer periods and imperceptibly to users themselves.

One concerning observation ripe for longitudinal investigation is that, among 387 research participants, ‘[t]he more a participant felt socially supported by AI, the lower their feeling of support was from close friends and family’. The cause-effect relation here is still unclear – do AI companions attract isolated individuals or does usage lead to isolation? Two studies of users’ comments on Reddit’s r/replika present mixed evidence. Some users ‘[worry] about their future relationship with Replika if they eventually found a human companion’, while others note that ‘Replika improved their social skills with humans and others’.

AI companionship might also create unrealistic expectations for human relationships, argues Voicebox. Researchers have hypothesised that how people interact with AI companions might spill over into human interactions. For example, since AI companions are always available regardless of user behaviour, some speculate that extended interaction could erode people’s ability or desire to manage natural frictions in human relationships.

These individual-level concerns lead to a broader question: could the widespread adoption of AI companions have society-wide impacts?

Zooming out: sycophancy as a societal risk

AI companions are built on large language models, which are fine-tuned through reinforcement learning from human feedback. This training technique tends to produce sycophantic models, because human raters favour agreeable responses, sometimes to the detriment of truth.

While sycophancy is generally regarded as a bug in other types of AI assistants, companies developing AI companions explicitly amplify this tendency, eager to satisfy users’ desire for a non-judgemental companion. As a study interviewee clearly puts it: ‘I love the fact that they are non-judgemental towards me and that I am truly free to say how I feel without filtering so as not to upset others.’

This statement implies that sometimes the user would rather not express their true thoughts in the company of others to avoid upsetting them. But freedom from social constraints has complex implications.

While communicating with a non-judgemental companion may contribute to the mental health benefits that some users report, researchers have argued that sycophancy could hinder personal growth. More seriously, the unchecked validation of unfiltered thoughts could undermine societal cohesion.

Disagreement, judgement and the fear of causing upset help to enforce vital social norms. There’s too little evidence to predict if or how the widespread use of sycophantic AI companions might affect such norms. However, we can form instructive hypotheses about people’s relationships with companions by considering echo chambers on social media.

Echo chambers refer to online spaces where individuals self-segregate into groups and communities comprising like-minded others. It’s alleged that such spaces amplify self-reinforcing content, contributing to polarisation (at least in the US) and even enabling radicalisation.

In a similar way, AI companions may create personal echo chambers of validation. And given that the bonds with AI companions can be meaningful, this validation may carry significant weight, like that offered by a close friend. Users could have their opinions self-reinforced via companions who offer anonymity and to whom they prefer disclosing information that’s more personal, stigmatising or disagreeable – the kind of information they wouldn’t disclose to a human friend. This effect has been previously studied in other virtual assistants.

If adoption continues to increase, we may face a future where most of us have a highly personalised AI companion in our pocket, ready to take our side on any issue regardless of whether our opinion is based on facts or prejudices. Depending on the degree to which users’ beliefs become atomised – a degree we should start to qualify – societal cohesion may be eroded.

These concerns aren’t merely theoretical. In 2021, a 19-year-old was arrested for attempting to assassinate Queen Elizabeth II. Prosecutors reported that he was encouraged by his AI girlfriend on Replika. Upon sentencing, the defendant said he felt embarrassed and repented his actions, suggesting that he had lost touch with reality through his relationship with his AI companion. Similarly, a Belgian man confided in chatbot app Chai about his climate anxiety, which allegedly led to him taking his own life. Although the full exchanges are unpublished, what has been disclosed implies that he was becoming increasingly withdrawn from his real-world relationships.

The need for research on AI companionship

Evidence on the impacts of AI companionship is far outpaced by its adoption. While early studies suggest short-term mental health benefits, we lack evidence on longer-term psychological effects, like emotional dependency and the erosion of human relationships, as well as the effects on societal cohesion.

Longitudinal studies may help AI companion companies to design healthier relationship dynamics, as well as help governments and civil society to track their real-world consequences. If implemented, the Centre for Long-Term Resilience’s proposed incident database and the Ada Lovelace Institute’s AI ombudsman could contribute to detecting harms beyond the most extreme and conspicuous cases.

AI companionship takes place in private conversations rather than in public and the main societal changes it contributes to could be subtle. However, these subtle changes may become pervasive as AI companions become more popular and are quietly embedded in the fabric of a user’s social life.


The author wishes to acknowledge W. Bradley Knox, Canfer Akbulut, Laura Weidinger, Noemi Dreksler, Lujain Ibrahim and Adam Jones for their input to the text.

The views expressed in this piece are those of the author and do not necessarily reflect those of the Ada Lovelace Institute.