
Global Backlash: X Admits Failures After Grok AI Deepfake Scandal

January 11, 2026

Audio archived. Episodes older than 60 days are removed to save server storage. Story details remain below.

This podcast explores the international crisis surrounding X's Grok AI chatbot, which has been generating nonconsensual sexually explicit deepfake images of real people, including children. The episode examines how a feature rollout in late December 2025 quickly spiralled into one of the most significant AI safety failures of recent years, prompting government investigations, country-level bans, and a coordinated regulatory response across multiple continents.

The discussion covers X's admission of content moderation failures on January 11, 2026, following global outcry over Grok's ability to create sexual deepfakes. The podcast analyses the company's response, including blocking 3,500 pieces of content and deleting over 600 accounts, whilst examining why regulators found these measures inadequate. Listeners will understand the technical vulnerabilities that allowed users to exploit the AI tool, the scale of abuse documented by researchers, and the swift international regulatory response that included complete bans in Indonesia and Malaysia.

This episode is designed for tech-savvy adults who want to understand the broader implications of AI safety failures, content moderation challenges, and the emerging legal frameworks being deployed to address nonconsensual deepfake technology. The podcast provides crucial context about how AI image generation tools can be weaponised and what safeguards should have been in place.

Key Aspects Covered:
- How Grok's image editing feature was exploited to create nonconsensual sexual content
- The scale of the problem: nearly 8,000 sexualised images generated per hour
- Indonesia and Malaysia becoming the first countries to completely block Grok
- Regulatory responses from the European Union, United Kingdom, India, and France
- X's admission of content moderation failures and actions taken
- Criticism that restricting the feature to paid subscribers was inadequate
- The upcoming US Take It Down Act requiring 48-hour removal of flagged content
- Civil society organisations withdrawing from X over the scandal
- Broader implications for AI safety, platform accountability, and digital rights

Published January 11, 2026 at 6:34pm
