Advanced AI video generation may lead to a new era of dangerous deepfakes
What safeguards should be put in place to ensure people can control their image and likeness?
3 April 2025
When Taylor Swift became the target of AI-generated pornographic images that went viral in early 2024, the world took notice. Not because deepfakes were new, but because they had finally targeted someone powerful enough to force a conversation about their dangers.
Swift is one of many victims of deepfakes. In October 2023, an AI-generated video of climate activist Greta Thunberg advocating for the virtues of vegan grenades and biodegradable missiles circulated online. In July 2024, Spanish teenagers were put on probation after generating deepfake nudes of their classmates. The girls portrayed reportedly suffered from anxiety attacks and had been scared to come forward, worried they would be blamed for the images. In South Korea, teenage girls were targeted by explicit deepfake images and described similar anguish. In South Africa, a well-known TV presenter’s life has been disrupted after websites used her AI-generated likeness to promote scams.
These aren’t isolated incidents. As alarming as they are, these examples represent merely the first ripples of a much larger wave. Upcoming advances in AI video generation will make deepfake harms much worse, further eroding people’s ability to control their own likeness.
Current AI-generated videos exist in an uncanny valley, often betraying their artificial nature through subtle errors in physics, unnatural movements or inconsistencies that alert viewers that something isn’t quite right. Even OpenAI has acknowledged the limitations of their current systems. Up until now, these technical barriers have provided a thin layer of protection against complete digital impersonation.
But these protections are eroding rapidly. AI-generated videos are becoming more realistic, capable of longer-form outputs and improved physics. In December 2024, Google gave a sneak peek of its new model Veo to show how well it is already performing on short videos. The demo provided a glimpse of a near future where distinguishing between authentic and artificial videos will become nearly impossible for the average person.
The potential consequences go far beyond mere technological disruption. They threaten to deepen the infringement of individual autonomy already seen with crude or static deepfakes.
While AI-generated content has legitimate applications, such as creating controlled variations in images for social science research, the weaponisation of synthetic media is widespread.
With advanced AI video generation, the harms of deepfake pornography will evolve from static images to indistinguishable video ‘evidence’ of intimate moments that never occurred, creating lasting psychological trauma and reputational damage. And if history is anything to go by, this will disproportionately hurt women and girls.
These systems can cause harms spanning non-consensual imagery, revenge porn and child sexual abuse materials. They can also affect people’s jobs. For example, screen actors might lose out on work as they sign away rights to their own likeness out of financial necessity. Advanced AI video generation will also affect the media landscape, as an influx of highly realistic deepfakes will make it harder and harder to tell what is real on the internet.
The complete inadequacy of current safeguards makes the crisis more acute.
Existing technical safeguards for static deepfakes include input and output classifiers. Input classifiers check the prompts that users submit to the system and block or rewrite those that violate company policies. Output classifiers, instead, block generated content recognised as harmful. A common non-technical measure is limiting access to a model to vetted buyers and users. Still, these methods are not enough to stop the production of unsafe images. Even with their vast resources and public commitments to safety, tech giants like Microsoft failed to prevent the generation of Taylor Swift deepfakes by their own models.
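The two-stage safeguard described above can be sketched in a few lines of code. This is a minimal illustration only: the blocked-term list, the harm-score threshold, and the function names are hypothetical placeholders, and real input classifiers are trained models rather than keyword filters.

```python
# Illustrative sketch of input/output classifier safeguards.
# The policy terms and threshold are invented for this example,
# not any company's actual policy.

BLOCKED_TERMS = {"nude", "explicit"}  # hypothetical policy list


def input_classifier(prompt: str) -> bool:
    """Return True if a prompt is allowed to reach the model."""
    words = set(prompt.lower().split())
    return not (words & BLOCKED_TERMS)


def output_classifier(harm_score: float, threshold: float = 0.5) -> bool:
    """Return True if generated content (scored by a separate
    harm-detection model) may be released to the user."""
    return harm_score < threshold


def safeguarded_generate(prompt, generate, score):
    """Run generation only if both checks pass; otherwise block."""
    if not input_classifier(prompt):
        return None  # blocked at the prompt stage
    content = generate(prompt)
    if not output_classifier(score(content)):
        return None  # blocked at the output stage
    return content
```

Even in this toy form, the weakness the article points to is visible: a prompt that paraphrases around the blocked terms passes the input check, and the output check is only as good as the harm-scoring model behind it.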
The safeguards in place for AI video generation are even more threadbare, as screening tools struggle to identify harmful content across frames of a video. At the same time, the present legal and policy tools are not sufficient to protect people against harms impacting their dignity. While governments across the world have started to pay attention to deepfakes, they still lack regulations, rights and mechanisms to effectively control them. This gap will become more problematic when advanced AI video generation tools proliferate – unless we act now.
There are several steps that can be taken immediately.
On the industry side, developers of video generation models should limit the availability of their systems. Rather than making them widely available to all users, they could offer them only to approved businesses under specific conditions for appropriate use, such as responsible AI licensing practices. In general, tech companies should restrict the release of models and their ability to generate videos of people, until regulators and external auditors have established safeguards for misuse and verified their adequacy.
At the same time, policymakers should pressure tech companies to only release models once these safeguards have been put in place and proven to be reliable and effective.
These are not perfect solutions, and they likely won’t stop all developers from distributing their products. But they are steps toward holding the tech industry accountable and mitigating the considerable risks that AI video generation brings.
The fundamental principle at stake is one most people intuitively understand: that each of us should be able to control how our likeness is used and portrayed. Ask yourself: shouldn’t you be in control of how you appear to the world?