It has been over a week now since users on X began en masse using the AI model Grok to undress people, including children, and the Elon Musk-owned platform has done next to nothing to address it. Part of the reason is that, currently, the platform isn’t legally obligated to do much of anything about the problem.
Last year, Congress enacted the Take It Down Act, which, among other things, criminalizes nonconsensual sexually explicit material and requires platforms like X to provide an option for victims to request that content using their likeness be taken down within 48 hours. Democratic Senator Amy Klobuchar, a co-sponsor of the law, posted on X, “No one should find AI-created sexual images of themselves online—especially children. X must change this. If they don’t, my bipartisan TAKE IT DOWN Act will soon require them to.”
Note the “soon” in that sentence. The requirement within the law for platforms to create notice and removal systems doesn’t go into effect until May 19, 2026. Currently, neither X (the platform where the images are being generated via posted prompts and hosted) nor xAI (the company responsible for the Grok AI model that is generating the images) has formal takedown request systems. X has a formal content takedown request procedure for law enforcement, but general users are advised to go through the Help Center, where it appears users can only report a post as violating X’s rules.
If you’re curious just how likely the average user is to get one of these images taken down, just ask Ashley St. Clair how well her attempts went when she flagged a nonconsensual sexualized image of her that was shared on X. St. Clair has about as much access as anyone to make a personal plea for a post’s removal—she is the mother of one of Elon Musk’s children and has an X account with more than one million followers. “It’s funny, considering the most direct line I have and they don’t do anything,” she told The Guardian. “I have complained to X, and they have not even removed a picture of me from when I was a child, which was undressed by Grok.”
The image of St. Clair was eventually removed, seemingly after it was widely reported by her followers and given attention in the press. But St. Clair now claims she was thanked for her efforts to raise this issue by being restricted from communicating with Grok and having her X Premium membership revoked. Premium allows her to get paid based on engagement. Grok, which has become the default source of information on this whole situation, despite the fact that it is an AI model incapable of speaking for anyone or anything, explained in a post, “Ashley St. Clair’s X checkmark and Premium were likely removed due to potential terms violations, including her public accusations against Grok for generating inappropriate images and possible spam-like activity.”
Enforcement outside of the Take It Down Act is possible, though less straightforward. Democratic Senator Ron Wyden suggested that the material generated by Grok would not be protected under Section 230 of the Communications Decency Act, which typically grants tech platforms immunity from liability for the illegal behavior of users. Of course, it’s unlikely the Trump administration’s Department of Justice would pursue a case against Musk’s companies, leaving attempts at enforcement up to the states.
Outside of the US, some governments are taking the matter much more seriously. Authorities in France, Ireland, the United Kingdom, and India have all started looking into the nonconsensual sexual images generated by Grok and may eventually bring charges against X and xAI.
But it certainly doesn’t seem like the head of X and xAI is taking the matter all that seriously. As Grok was generating sexual images of children, Elon Musk, the CEO of both companies involved in this scandal, was actively reposting content created as part of the trend, including AI-generated images of a toaster and a rocket in a bikini. Thus far, the extent of X’s acknowledgement of the situation starts and ends at blaming the users. In a post from X Safety, the company said, “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” but took no responsibility for enabling it.
If anything, what Grok has been up to in recent weeks seems like it is probably closer to what Musk wants out of the AI. Per a report from CNN, Musk has been “unhappy about over-censoring†on Grok, including being particularly frustrated about restrictions on Grok’s image and video generator. Publicly, Musk has repeatedly talked up Grok’s “spicy mode†and derided the idea of “wokeness†in AI.
In response to a request for comment from Gizmodo, xAI said, “Legacy Media Lies,†the latest of the automated messages that the platform has sent out since it shut down its public relations department.
Original Source: https://gizmodo.com/heres-when-elon-musk-will-finally-have-to-reckon-with-his-nonconsensual-porn-generator-2000707799
