Sexualised deepfakes are becoming an increasingly serious issue worldwide — and even heads of government are being targeted.
Italian Prime Minister Giorgia Meloni wrote on X that she had been singled out by what she described as “zealous opponents” through a set of sexualised deepfake images.
The manipulated pictures appear to show her sitting on a bed in her underwear; one social media user responded that her appearance was “shameful and unworthy of the institutional role she holds.”
Meloni responded with sarcasm, posting: “I must admit that whoever created them, at least in the attached case, has also improved me quite a bit.”
Several fake photos of me are circulating these days, generated with artificial intelligence and passed off as real by some zealous opponent.
I must admit that whoever made them, at least in the attached case, has also improved me quite a bit. But the fact remains that, just to… pic.twitter.com/or44qru2qj
— Giorgia Meloni (@GiorgiaMeloni) May 5, 2026
She also urged people to think carefully before amplifying content online.
“Check before you believe, and believe before you share. Because today it’s happening to me; tomorrow it could happen to anyone.
“Deepfakes are a dangerous tool, because they can deceive, manipulate, and strike anyone. I can defend myself. Many others cannot.”

Italy was the first EU country to outlaw deepfakes, introducing legislation in 2025 aimed at preventing the use of artificial intelligence to inflict harm — including the creation of sexualised deepfake content.
This is not the first time Meloni has been targeted. Previously, doctored media featuring her reportedly appeared on an adult website alongside altered images of other high-profile women.
In 2024, she sued two men for €100,000 over allegations they posted fabricated videos of her to a pornographic website hosted in the US.

The broader problem has been highlighted by other high-profile incidents. In early 2024, explicit AI-generated, non-consensual images of pop star Taylor Swift spread rapidly on X and other platforms, drawing tens of millions of views.
The viral spread of that material intensified public scrutiny of deepfake pornography and added momentum to calls for tougher protections against non-consensual sexual imagery.
X also faced criticism after the 2025 launch of an image-editing tool linked to Elon Musk’s Grok, which was reportedly used to churn out huge volumes of sexualised images — including disturbing content depicting children.
The controversy led to investigations by multiple bodies, including the EU, Ofcom in the UK, and regulators in Ireland.

In the United States, new legislation has also been introduced. On May 19, 2025, President Donald Trump signed the TAKE IT DOWN Act (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act), creating a major federal framework intended to tackle non-consensual intimate imagery and AI-generated “deepfakes”.
The Italian Prime Minister's press office was approached for comment.

