Elon Musk’s former partner issues chilling warning amid ‘urgent’ Grok investigation into indecent images

Ashley St Clair has issued a stark warning about the future of AI as investigations get underway into Elon Musk’s Grok generating inappropriate images involving children and women.

St Clair first gained public attention last year following her claims of having given birth to Musk’s thirteenth child, named Romulus. According to reports, St Clair and Musk met for the first time in spring 2023, and their relationship turned romantic after St Clair was invited to the San Francisco offices of Musk’s company X (formerly known as Twitter). During a New Year’s trip to St. Barts, St Clair alleges they conceived a baby boy together, as reported by The Wall Street Journal. A public dispute over child support has ensued, but St Clair is now focusing on expressing her worries about Musk’s AI chatbot, Grok.

Grok was launched in November 2023 and has since drawn attention for its NSFW ‘spicy’ mode, which produces explicit content when users prompt it to.

More recently, users have turned to Grok to create explicit images of women without their consent.

Last month, Grok’s X account even issued an apology after generating and sharing an AI image of two young girls, estimated to be between 12 and 16 years old, in sexualized clothing at a user’s request.

Other social media users have reported that their own photos have been altered by Grok to appear sexualized.

In response, the UK communications regulator, Ofcom, has initiated an ‘urgent’ investigation due to ‘serious concerns’ that Grok is generating ‘undressed images of people and sexualized images of children’.

“We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK,” Ofcom stated in its announcement on Monday (January 5).

“Based on their response we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation.”

St Clair, who alleges that a photo of her at 14 was manipulated by someone on X to depict her ‘removing her clothes’, has raised the alarm over this troubling development.

“When Grok went full MechaHitler, the chatbot was paused to stop the content,” she wrote on X.

“When Grok is producing explicit images of children and women, xAI has decided to keep the content up + overwhelm law enforcement with cases they will never solve with foreign bots and take resources from other victims.

“This issue could be solved very quickly. It is not, and the burden is being placed on victims.”

St Clair then made a foreboding prediction about AI’s trajectory, noting: “They are going to use this to push for Section 230 like protections for AI. And it should not be allowed to happen.”

In the United States, Section 230 of the Communications Decency Act of 1996 offers legal protection to online platforms from being held liable for content produced by users, as highlighted by the University of Chicago Business Law Review.

This law ensures that websites, social media companies, and online services are not treated as ‘publishers’ or ‘speakers’ of third-party content.

Platforms are also shielded when they moderate or remove content in ‘good faith’.

While this legislation guards platforms against many civil liability claims, it does not protect them from federal criminal law, copyright claims, or certain privacy and child protection regulations.

Nonetheless, if Section 230 or a similar law provides protection to X and xAI, it could set a concerning precedent for victims of generative AI in the future.

Although Section 230 is specific to the US and Ofcom’s investigation is taking place in the UK, Britain has its own legislation, the Online Safety Act, which imposes duties of care on platforms and allows for fines if they fail to remove harmful content. The outcome of Ofcom’s investigation and the potential repercussions for xAI have yet to be determined.

UNILAD has reached out to xAI and X for their comments.