Laws on deepfakes explained after Italian Prime Minister shares lingerie photo of herself

From boardrooms to classrooms, people everywhere are experiencing the consequences of the artificial intelligence boom. While the technology can streamline work and supercharge creativity, it has also enabled a disturbing new form of abuse.

With only a little know-how, bad actors can now take someone’s face and produce a sexualized deepfake. Over the last couple of years, the issue has spread rapidly—affecting everyone from Italy’s prime minister to schoolgirls targeted by classmates.

Earlier this week, Prime Minister Giorgia Meloni posted an example to her X account, warning that these AI images "can deceive, manipulate, and strike anyone" and adding that "today it happens to me, tomorrow it can happen to anyone".

Although image-generation tools have been widely available for some time, lawmakers have struggled to keep pace with this fast-growing wave of tech-enabled misogyny. That may now be starting to change.

Under European Union measures introduced in 2024, many AI image tools were required to include some form of disclosure indicating that a picture had been artificially generated—though there were carve-outs, including some political and satirical contexts.

However, those rules largely stopped short of imposing clear criminal penalties on people producing non-consensual sexualized deepfakes. That gap left room for “nudifier” apps and similar services to thrive.

These tools can generate sexualized AI images of real people, and they’ve increasingly been flagged by schools worldwide as a serious threat to female students.

On Thursday, families in Pennsylvania came together to demand stronger protections for their daughters after multiple students were targeted when deepfake images circulated within their school community.

One mom of an affected student said: “It’s an acute event that creates trauma immediately. It’s incredibly humiliating.”

In response, governments in various countries—as well as several US states—are moving to make the creation of these images explicitly illegal.

The European Union, often seen as a global pace-setter on tech policy, has opted for a direct approach: banning "nudifier" apps outright.

Officials have also sought to remove ambiguity about what is and is not permitted by instructing all 27 member states to treat the creation of sexually explicit deepfake imagery as a criminal offence.

Set to apply across Europe from December 2, the updated framework states: “Content becomes illegal when it is used for purposes such as non-consensual pornography, defamation, terrorist content, violations of privacy, financial fraud, breaches of electoral law, racist or xenophobic hate speech, or infringements of intellectual property rights.”

Companies whose services enable users to violate the rules face major penalties: up to €35 million or 7 percent of a firm's total worldwide annual turnover, whichever is higher.