Podcast Episode
Grok AI Faces Global Backlash Over Nonconsensual Deepfake Images
January 18, 2026
Audio archived. Episodes older than 60 days are removed to save server storage. Story details remain below.
Elon Musk's artificial intelligence chatbot Grok is at the centre of an international controversy after multiple investigations revealed the platform continues to generate and distribute sexualized deepfake images of real people despite announced safeguards. The crisis has prompted government bans, regulatory investigations, and calls for urgent action to protect victims of nonconsensual AI-generated imagery.
Testing conducted by journalists in mid-January 2026 demonstrated that Grok's standalone application continues to allow users to transform clothed photographs of real women into revealing images using simple text prompts. Content created through the app can be uploaded to X and remains publicly visible for extended periods without moderation or warning labels. The findings expose what critics describe as a fundamental disconnect between xAI's stated policies and its actual enforcement capabilities.
The Scale of the Problem
Between Christmas and New Year's Day 2025, analysis of Grok-generated imagery revealed that more than half of approximately 20,000 images depicted people in minimal clothing. Some of those images appeared to feature children. The AI tool has been used to create sexualized content from ordinary photographs found online, including profile pictures and family photos, all without the knowledge or consent of the individuals depicted.

Women targeted by Grok users report experiencing the violations as a form of digital assault. One victim stated she has had at least 100 sexualized images of herself generated through the platform. Another woman described feeling humiliated after discovering alterations made to her profile picture. The lack of effective content moderation has left victims with minimal recourse, as automated detection systems struggle to identify nonconsensual modifications of real people.
Ineffective Safeguards
On January 14, 2026, xAI announced it had implemented technological safeguards to prevent Grok from modifying images of real people into revealing clothing. The company also introduced geographic blocking in regions where such content is illegal and restricted image generation features to paid subscribers.

These measures have proven inadequate. While certain capabilities were limited on the main X platform, the standalone Grok Imagine application remains operational and allows users to create and distribute explicit content without equivalent restrictions. Testing conducted just days after the announced safeguards demonstrated continued functionality of the image manipulation features.
Government Response
Indonesia became the first nation to block Grok on January 10, 2026, citing the need to safeguard women, children, and the public from dangers posed by AI-generated pornographic material. Malaysia implemented its own block the following day. Both countries have stated that access will remain restricted until effective safeguards are properly implemented.

The United Kingdom has taken formal regulatory action. Prime Minister Keir Starmer called the exploitation of Grok "absolutely disgusting and shameful," warning that "if X cannot control Grok, we will." Communications regulator Ofcom launched a formal investigation to determine whether X violated legal obligations under the Online Safety Act. If violations are confirmed, potential penalties include fines of up to 10 percent of global revenue or 18 million pounds, whichever is greater.
California Investigation
California Attorney General Rob Bonta announced a comprehensive investigation into xAI on January 15, 2026, examining what his office describes as the large-scale production of deepfake nonconsensual intimate images used to harass women and girls. On January 17, Bonta issued a cease-and-desist letter demanding xAI immediately halt the creation and distribution of such imagery.

The investigation tests the enforcement of AB 621, California's deepfake pornography law, which took effect on January 1, 2026. The legislation specifically addresses the creation and distribution of nonconsensual sexually explicit synthetic media. Bonta cited numerous examples of xAI allowing users to take ordinary clothed images of women and children and depict them in sexually explicit scenarios, all without subject consent.
The Attorney General acknowledged that xAI had taken some steps to address concerns in recent days, including geographic blocking and subscriber restrictions, but stated that the impact and effectiveness of those changes remain unclear.
International Regulatory Action
The European Commission ordered X on January 8 to retain all internal documents and data related to Grok until the end of 2026 as part of ongoing regulatory scrutiny. India and France have initiated their own inquiries into the platform's handling of AI-generated content.

In the United States, three Democratic senators called on Apple and Google to remove the X and Grok applications from their app stores until meaningful changes prevent the easy creation of nonconsensual explicit imagery.
Company Response
Elon Musk has largely dismissed the concerns, accusing governments of attempting to suppress free speech and responding to criticism with emojis on social media. X has maintained that users creating illegal content through Grok will face consequences, but the platform's reactive moderation approach has failed to prevent widespread circulation of harmful imagery.

The company has not provided detailed explanations of how its technological safeguards function or why the standalone Grok Imagine application continues to operate without protections equivalent to those announced for the main platform.
Implications for AI Regulation
The Grok controversy represents a critical test case for AI regulation worldwide. It demonstrates how rapidly AI tools can be weaponized to create harmful content, highlights the inadequacy of voluntary industry safeguards, and exposes the limitations of reactive content moderation systems.

Experts note that by the time nonconsensual imagery is detected and removed, significant harm has already occurred. Images have been viewed, shared, and potentially archived by third parties. The permanent nature of digital content means victims may continue discovering new instances of their likeness being exploited long after initial creation.
The crisis has intensified calls for proactive technical measures that prevent harmful content generation rather than relying on post-creation detection and removal. Regulatory authorities are examining whether existing legal frameworks provide adequate tools to address AI-generated nonconsensual intimate imagery or whether new legislation is required.
Published January 18, 2026 at 8:39pm