The AI-generated lingerie images of Giorgia Meloni may have caused outrage, but a leading deepfake researcher says the situation that enabled them has been building for years, and was never a surprise.
Dr Henry Ajder has spent close to eight years tracking the growth of non-consensual, AI-generated sexual content. His assessment of the current moment is stark: producing a deepfake has become simpler, faster, and more widely available than ever.
Legal changes, he argues, have not kept pace with the scale of what’s happening. Prosecutions remain rare compared to the volume of material being created and shared daily.
Asked whether Donald Trump’s TAKE IT DOWN Act is likely to significantly curb the creation of these images, Ajder said: “Nothing has really changed. If anything, it has got worse.”
When Meloni shared the AI-generated images of herself on X this week, she did something many victims can’t: she publicly pushed back, backed by a high profile and the ability to command attention online.
Ajder’s key point, though, is that cases like hers aren’t confined to public figures. The same kind of abuse is increasingly aimed at ordinary people, far from the headlines.
In fact, the technology used to create these images is already sitting in people’s pockets.

“It has never been easier to create deepfakes that non-consensually sexualise people,” he said.
He points to the rapid spread of so-called “nudification” services — many offered for free — that can run in a browser or on a phone and may only require a single image to generate a result.
Ajder describes today’s surge in this content as the outcome of four developments that have collided into what he calls a “perfect storm”.
First, the outputs look far more realistic than they did even a short time ago. Second, the systems are more efficient, meaning they need less data to produce convincing results. Third, access has exploded: the tools are now easier to use, often “gamified,” and designed to remove the friction that once required technical know-how.
Finally, he says the underlying capabilities have broadened well beyond basic face swaps. The ecosystem now includes full AI-generated videos, animated content, and — in a shift he believes is being underestimated — voice cloning.
“People can create synthetic, phone-sex style content using someone’s voice,” Ajder warned.

When Trump signed the TAKE IT DOWN Act last year, and when Italy became the first EU country to criminalise deepfakes, it appeared to mark a turning point in how governments treat this kind of abuse.
The TAKE IT DOWN Act is a US federal law, signed on 19 May 2025, that criminalises the non-consensual publication of intimate images, including AI-generated deepfakes and “revenge porn”. It requires online platforms to implement notice-and-removal procedures and to delete reported content within 48 hours of a valid request.
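To make the 48-hour obligation concrete, here is a minimal, purely illustrative sketch of how a platform might track that deadline. The class and field names are hypothetical, drawn neither from the statute nor from any real platform’s systems.

```python
# Hypothetical sketch of a notice-and-removal record under a 48-hour
# deadline; all names and structures here are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # removal window mandated by the Act

@dataclass
class TakedownNotice:
    content_id: str
    received_at: datetime
    removed_at: datetime | None = None  # None means not yet actioned

    def deadline(self) -> datetime:
        # The clock starts when a valid notice is received.
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        return self.removed_at is None and now > self.deadline()

# Example: a notice filed 50 hours ago that was never acted on is overdue.
now = datetime.now(timezone.utc)
stale = TakedownNotice("img_123", received_at=now - timedelta(hours=50))
print(stale.is_overdue(now))  # True: past the 48-hour removal window
```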
Ajder doesn’t dismiss legislation, but he also warns against assuming laws alone will stop the problem.
One of his main clarifications is about scope: these laws typically don’t outlaw the underlying technology itself.
Grok, for example, hasn’t been banned, and neither has open-source software that can be downloaded, modified, and redistributed. Instead, the crime is the act of using those tools to create and share non-consensual sexual content.

Where laws do help, he says, is by making the moral and legal status of the behaviour harder to deny. “It has helped signal more clearly to the general public that this is a form of sexual offending – you are a sex offender if you do this.”
In the UK, for instance, a conviction can mean jail time and registration as a sex offender. Adjer argues that clarity matters because online spaces that produce and trade this material have often tried to downplay it — treating it like a joke or a meme, as if a digital image can’t do real damage.
He rejects that logic completely.
“Just because this is AI-generated doesn’t mean it’s less harmful or that it’s not on the same spectrum as physical assault, harassment, or abuse.”
Enforcement, however, is where the gap becomes obvious. Offenders can act anonymously, operate across borders, and shift platforms quickly.
And the number of people actually being arrested and tried, Ajder says, “is a drop in the ocean compared to the number of cases happening on a daily basis.”
“It’s naive to expect we’ll ever be able to truly eradicate this problem; it’s endemic.”

Even improved detection tools won’t automatically solve it, Ajder adds. On platforms that already host legal adult content, determining whether an image is AI-made isn’t enough; moderators still need to judge consent and whether the person depicted is real. At scale, that becomes an enormous human workload.
Still, he says one developing approach could offer meaningful protection for everyday users — even if it’s less effective for celebrities, whose faces already appear across vast amounts of public media.
It’s known as data poisoning.
According to IBM, data poisoning is an adversarial attack in which malicious actors deliberately inject, manipulate, or delete training data to corrupt a machine learning model’s integrity. The “poison” causes the model to learn incorrect patterns, introducing backdoors or bias and resulting in unreliable performance or targeted malfunctions.
Applied defensively, the concept is to embed subtle signals into your photos — often imperceptible to humans — that disrupt AI systems if they attempt to train on those images. The goal is to make the model’s results degrade or fail.
“It’s almost like a shield or defensive layer; if social media platforms or dating apps built this into their tech, it could have a huge impact and protect ordinary people,” he said.
For a young woman sharing photos on Instagram, he suggests, that kind of built-in protection could materially reduce risk.
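For readers who want a sense of the mechanics, here is a toy sketch of the defensive version of the technique, in the spirit of research tools such as Fawkes and Glaze. Everything in it is an assumption for demonstration: the randomly initialised network stands in for whatever model an attacker might train on the photos, and real tools are far more sophisticated.

```python
# Toy sketch of defensive "image cloaking": add a perturbation, invisible to
# the eye, that pushes an image's feature embedding away from the original,
# so models trained on the photo learn corrupted patterns. The resnet18 with
# random weights is a hypothetical stand-in for an attacker's model.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

backbone = resnet18(weights=None).eval()
extract = torch.nn.Sequential(*list(backbone.children())[:-1])  # features only

def cloak(image: torch.Tensor, steps: int = 50,
          epsilon: float = 4 / 255, lr: float = 1 / 255) -> torch.Tensor:
    """Return a visually identical copy of `image` whose features drift
    as far as possible from the original, within a +/- epsilon pixel budget."""
    target = extract(image).detach()
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = -F.mse_loss(extract(image + delta), target)  # maximise drift
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # signed gradient step
            delta.clamp_(-epsilon, epsilon)   # keep the change imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(0.0, 1.0).detach()

photo = torch.rand(1, 3, 224, 224)             # stand-in for a real photo
protected = cloak(photo)
print(float((protected - photo).abs().max()))  # bounded by epsilon
```

The trade-off is the one Ajder points to: the perturbation only protects photos processed before they spread, which is why the approach suits ordinary users better than celebrities, whose unprotected images are already everywhere.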

Ajder ends by urging people to rethink how casually the internet often treats public figures. Fame, he says, doesn’t make someone immune, and it doesn’t reduce the harm when their likeness is exploited.
“Just because you’re famous doesn’t mean that these kinds of attacks and this kind of content doesn’t hurt and traumatise in the same way as they do for a private person,” he added.
And as Meloni herself put it this week, “I can defend myself. Many others cannot.”

