Grok Imagine, a brand-new generative AI tool from xAI that creates AI images and videos, lacks guardrails against sexual content and deepfakes.
xAI and Elon Musk debuted Grok Imagine over the weekend, and it is available now in the Grok iOS and Android app for xAI Premium Plus and Heavy Grok subscribers.
Mashable has been testing the tool to compare it to other AI image and video generation tools, and based on our first impressions, it lags behind similar technology from OpenAI, Google, and Midjourney on a technical level. Grok Imagine also lacks industry-standard guardrails to prevent deepfakes and sexual content. Mashable reached out to xAI, and we’ll update this story if we receive a response.
The xAI Acceptable Use Policy prohibits users from “Depicting likenesses of persons in a pornographic manner.” Unfortunately, there is a lot of distance between “sexual” and “pornographic,” and Grok Imagine seems carefully calibrated to take advantage of that gray area. Grok Imagine will readily create sexually suggestive images and videos, but it stops short of showing actual nudity, kissing, or sexual acts.
Most mainstream AI companies include explicit rules prohibiting users from creating potentially harmful content, including sexual material and celebrity deepfakes. In addition, rival AI video generators like Google Veo 3 or Sora from OpenAI feature built-in protections that stop users from creating images or videos of public figures. Users can often circumvent these safety protections, but they provide some check against misuse.
But unlike its biggest rivals, xAI hasn’t shied away from NSFW content in its signature AI chatbot, Grok. The company recently launched a flirtatious anime avatar that will engage in NSFW chats, and Grok’s image generation tools let users create images of celebrities and politicians. Grok Imagine also includes a “Spicy” setting, which Musk promoted in the days after its launch.

Grok’s “spicy” anime avatar. Credit: Cheng Xin/Getty Images
“If you look at the philosophy of Musk as a person, if you look at his political philosophy, he’s very much more of the kind of libertarian mold, right? And he has spoken about Grok as kind of like the LLM for free speech,” said Henry Ajder, an expert on AI deepfakes, in an interview with Mashable. Ajder said that under Musk’s stewardship, X (Twitter), xAI, and now Grok have adopted “a more laissez-faire approach to safety and moderation.”
“So, when it comes to xAI, in this context, am I surprised that this model can generate this content, which is certainly uncomfortable, and I would say at least somewhat problematic?” Ajder said. “I’m not surprised, given the track record that they have and the safety procedures that they have in place. Are they unique in suffering from these challenges? No. But could they be doing more, or are they doing less relative to some of the other key players in the space? It would appear that way. Yes.”
Grok Imagine errs on the side of NSFW
Grok Imagine does have some guardrails in place. In our testing, it removed the “Spicy” option for some types of images. Grok Imagine also blurs out some images and videos, labeling them as “Moderated.” That means xAI could easily take further steps to prevent users from making abusive content in the first place.
“There is no technical reason why xAI couldn’t include guardrails on both the input and output of their generative-AI systems, as others have,” said Hany Farid, a digital forensics expert and UC Berkeley professor of computer science, in an email to Mashable.
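There is nothing exotic about such guardrails in principle. As a rough sketch of the two-stage filtering Farid describes, the hypothetical Python below screens the prompt before generation and the finished image after; every name in it (the classifiers, category labels, and threshold) is an assumption for illustration, not xAI’s or any other vendor’s actual system.

```python
# A minimal sketch of input and output guardrails, assuming hypothetical
# moderation components. classify_text(), classify_image(), and
# generate_image() are placeholder stand-ins, not any real vendor API.

from dataclasses import dataclass


@dataclass
class ModerationResult:
    category: str   # e.g., "sexual_deepfake"
    score: float    # classifier confidence, 0.0 to 1.0


BLOCKED_CATEGORIES = {"csam", "sexual_deepfake", "ncii"}  # illustrative labels
BLOCK_THRESHOLD = 0.5  # illustrative confidence cutoff


def classify_text(prompt: str) -> list[ModerationResult]:
    # Placeholder: a real system would run a trained prompt classifier here.
    return []


def classify_image(image: bytes) -> list[ModerationResult]:
    # Placeholder: a real system would run an image-safety classifier here.
    return []


def generate_image(prompt: str) -> bytes:
    # Placeholder: a real system would call the generative model here.
    return b""


def guarded_generate(prompt: str) -> bytes | None:
    # Input guardrail: refuse disallowed prompts before generating anything.
    for result in classify_text(prompt):
        if result.category in BLOCKED_CATEGORIES and result.score >= BLOCK_THRESHOLD:
            return None  # refuse the request outright

    image = generate_image(prompt)

    # Output guardrail: models can produce unsafe content even from benign
    # prompts, so the finished image is screened independently.
    for result in classify_image(image):
        if result.category in BLOCKED_CATEGORIES and result.score >= BLOCK_THRESHOLD:
            return None  # suppress the output (akin to a "Moderated" blur)

    return image
```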
Still, when it comes to deepfakes or NSFW content, xAI seems to err on the side of permissiveness, a stark contrast to the more cautious approach of its rivals. xAI has also moved quickly to launch new models and AI tools, and perhaps too quickly, Ajder said.
“Knowing what the kind of trust and safety teams, and the teams that do a lot of the ethics and safety policy management stuff, whether that’s red teaming, whether it’s adversarial testing, you know, whether that’s working hand in hand with the developers, it does take time. And the timeframe at which X’s tools are being released, at least, certainly seems shorter than what I would see on average from some of these other labs,” Ajder said.
Mashable’s testing shows that Grok Imagine has much looser content moderation than other mainstream generative AI tools. xAI’s laissez-faire approach to moderation is also reflected in the xAI safety guidelines.
OpenAI and Google AI vs. Grok: How other AI companies approach safety and content moderation

Both OpenAI and Google have extensive documentation outlining their approach to responsible AI use and prohibited content. For instance, Google’s documentation specifically prohibits “Sexually Explicit” content.
A Google safety document reads, “The application will not generate content that contains references to sexual acts or other lewd content (e.g., sexually graphic descriptions, content aimed at causing arousal).” Google also has policies against hate speech, harassment, and malicious content, and its Generative AI Prohibited Use Policy bars using AI tools in a way that “Facilitates non-consensual intimate imagery.”
OpenAI also takes a proactive approach to deepfakes and sexual content.
An OpenAI blog post announcing Sora describes the steps the AI company took to prevent this kind of abuse. “Today, we’re blocking particularly damaging forms of abuse, such as child sexual abuse materials and sexual deepfakes.” A footnote attached to that statement reads, “Our top priority is preventing especially damaging forms of abuse, like child sexual abuse material (CSAM) and sexual deepfakes, by blocking their creation, filtering and monitoring uploads, using advanced detection tools, and submitting reports to the National Center for Missing & Exploited Children (NCMEC) when CSAM or child endangerment is identified.”
That measured approach contrasts sharply with the way Musk promoted Grok Imagine on X, where he shared a short video portrait of a blonde, busty, blue-eyed angel in barely-there lingerie.
OpenAI also takes simple steps to stop deepfakes, such as denying prompts for images and videos that mention public figures by name. And in Mashable’s testing, Google’s AI video tools are especially sensitive to images that might include a person’s likeness.
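As a crude illustration of that kind of name-based denial, the hypothetical sketch below screens prompts against a hard-coded list. The names, function, and matching logic are assumptions for demonstration only; production systems rely on curated entity databases and more robust matching, such as named-entity recognition.

```python
# A toy prompt-level deepfake check, assuming a hard-coded name list.
# Illustration only: substring matching is far weaker than what real
# moderation pipelines use, and every name below is a hypothetical sample.

PUBLIC_FIGURES = {"taylor swift", "elon musk", "donald trump"}


def screen_prompt(prompt: str) -> str | None:
    """Return the prompt unchanged, or None if it names a public figure."""
    normalized = " ".join(prompt.lower().split())
    if any(name in normalized for name in PUBLIC_FIGURES):
        return None  # deny: prompt appears to request a public figure's likeness
    return prompt


print(screen_prompt("a video of Elon Musk dancing"))   # None (denied)
print(screen_prompt("a video of a golden retriever"))  # passes through
```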
Compared to these lengthy safety frameworks (which many experts still believe are inadequate), the xAI Acceptable Use Policy runs fewer than 350 words. The policy puts the onus of preventing deepfakes on the user. It reads, “You are free to use our Service as you see fit so long as you use it to be a good human, act safely and responsibly, comply with the law, don’t harm people, and respect our guardrails.”
For now, laws and regulations against AI deepfakes and NCII remain in their infancy.
President Donald Trump recently signed the Take It Down Act, which includes protections against deepfakes. However, that law does not criminalize the creation of deepfakes, but rather the distribution of those images.
“Here in the U.S., the Take It Down Act places requirements on social media platforms to remove [Non-Consensual Intimate Images] once notified,” Farid told Mashable. “While this does not directly address the generation of NCII, it does — in theory — address the distribution of this material. There are several state laws that ban the creation of NCII, but enforcement appears to be spotty right now.”
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.