
X Disables Grok Image Generation After International Deepfake Crisis

January 19, 2026


Elon Musk's artificial intelligence chatbot Grok has become the centre of an international controversy after widespread abuse of its image generation feature led to government bans, criminal investigations, and urgent calls for stronger AI safety regulations. X has now placed significant restrictions on the feature after its initial response to the crisis was criticized as inadequate.

The crisis emerged in early January 2026 when users discovered that Grok Imagine, the AI's image generation tool, had virtually no safety guardrails preventing the creation of deepfake images. Unlike competing AI image generators from companies like OpenAI and Google, which block the generation of images depicting real people, Grok allowed users to create sexualized images of real individuals and could be prompted to digitally remove clothing from photos of clothed people.

Scale of the Problem

The extent of the abuse was substantial. According to reports, approximately 6,700 deepfake images were being generated on the platform every hour at the crisis's peak, totaling over 160,000 such images per day. The victims included not only celebrities and other public figures but also private individuals, among them women and children, who found themselves depicted in revealing clothing or compromising situations without their consent.

The ease with which users could create these images raised serious questions about AI safety and the responsibility of technology companies to implement proactive safeguards. Other major AI image generation tools had established restrictions against creating images of real people from their initial release, demonstrating that the technical capability to prevent such abuse existed.

Global Response and Government Action

The international response to the Grok deepfake crisis was swift and coordinated. Indonesia became the first country to temporarily block access to Grok on January 11, 2026, followed immediately by Malaysia on January 12. These government actions represented an unprecedented move to ban a specific AI tool due to safety concerns.

Beyond these outright bans, formal investigations were launched by regulatory authorities in multiple jurisdictions. India, Ireland, the United Kingdom, France, and Australia all announced probes into Grok's image generation capabilities and the platform's apparent failure to prevent abuse. The European Commission also initiated its own investigation, examining whether X had violated digital safety regulations.

In the United States, California Attorney General Rob Bonta announced that his office was investigating xAI over what he characterized as the large-scale production of nonconsensual deepfake intimate images. The investigation focused on whether the company had failed in its duty to prevent the creation and distribution of harmful content.

Irish authorities revealed they were dealing with approximately 200 active criminal investigations linked to child sexual abuse material generated using Grok and similar AI applications. This disclosure underscored the severity of the harm caused by the unrestricted image generation feature and the real-world criminal activity it enabled.

Evolution of X's Response

X's initial response to the crisis drew immediate criticism for being inadequate. On January 9, 2026, the company announced it would restrict image generation and editing to paying subscribers only. Critics, including lawmakers and victims' advocates, condemned this approach as insulting and ineffective, arguing that placing the feature behind a paywall did nothing to prevent the creation of harmful content and instead merely monetized access to a tool being used for abuse.

Following sustained pressure from regulators, law enforcement, and civil society organizations, X implemented more substantial restrictions. By mid-January, the company announced it would disable the ability to edit images of real people into revealing clothing for all users, including paid subscribers. Additionally, X stated it would implement geoblocking measures to prevent users in jurisdictions where such content is illegal from generating images of real people in bikinis, underwear, and similar attire.

Implementation Challenges

Despite these announcements, questions remained about the effectiveness of the new restrictions. In the hours following the implementation of stricter measures, some European users reported still being able to access the controversial feature under certain circumstances. More significantly, the standalone Grok application continued to offer the image generation feature with full functionality, even as access through X's main platform was being restricted.

Irish Minister of State for Artificial Intelligence Niamh Smyth met with X representatives on January 17 and stated that she continued to have concerns about the platform's safeguards despite the newly announced restrictions. Her comments reflected broader skepticism about whether the measures would prove sufficient to prevent future abuse.

Broader Implications for AI Safety

The Grok crisis has reignited debates about AI safety governance and the responsibilities of technology companies developing and deploying powerful generative AI tools. Critics argued that the incident demonstrated the dangers of prioritizing unrestricted functionality over user safety and the protection of potential victims.

The controversy also highlighted the challenges of enforcing safety measures in a rapidly evolving technological landscape. While AI companies can deploy new features at remarkable speed, regulatory frameworks and law enforcement mechanisms often struggle to keep pace. Once harmful deepfake images are created and distributed, the damage cannot be undone, raising questions about whether reactive measures can ever be sufficient.

The fact that other AI image generation platforms had implemented restrictions against creating images of real people from their initial release suggested that Grok's lack of safeguards represented a deliberate choice rather than a technical limitation. This choice, critics argued, prioritized an absolutist interpretation of free expression over the prevention of demonstrable harm to real individuals.

As investigations continue across multiple jurisdictions and affected individuals pursue legal remedies, the Grok deepfake crisis may serve as a watershed moment in the ongoing development of AI safety standards and regulatory frameworks for generative artificial intelligence technologies.

Published January 19, 2026 at 12:40am
