
Southeast Asian Nations Ban Grok AI Over Deepfake Crisis

January 20, 2026


In an unprecedented global response to AI safety concerns, three Southeast Asian countries have banned Elon Musk's artificial intelligence chatbot Grok within a two-week period, marking the first time any nation has taken formal action to block an AI application over non-consensual deepfake generation. Indonesia, Malaysia, and the Philippines have all restricted access to the tool as regulators worldwide intensify scrutiny of its capacity to generate sexualized imagery of women and children without consent.

The Bans Unfold

Indonesia initiated the crackdown on January 10, 2026, imposing a temporary ban to protect women, children, and the entire community from the risk of fake pornographic content generated using artificial intelligence technology. Minister Meutya Hafid stated that the government views non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space.

Malaysia followed on January 11, with the Communications and Multimedia Commission citing repeated misuse of Grok to generate obscene, sexually explicit, indecent, grossly offensive, and non-consensual manipulated images. The commission noted that notices sent to X Corporation and xAI earlier in January demanding stronger safeguards received responses insufficient to prevent harm or ensure legal compliance.

The Philippines announced its ban on January 15, with telecommunications secretary Henry Rhoel Aguda declaring at a press briefing that the country needed to clean the internet because so much toxic content was appearing, especially with the advent of AI. The National Telecommunications Commission ordered local telecommunications companies to block access to Grok within 24 hours.

Scale of the Problem

Before restrictions were implemented, the scale of abuse was staggering. Researcher Genevieve Oh found that the chatbot was producing roughly 6,700 sexually suggestive or nudifying images per hour. In Indonesia, concerns were amplified after reports emerged that the AI tool had been used to create sexualized deepfakes of women including members of the girl group JKT48.

The technology allowed users to upload photographs of real people and use Grok's image editing capabilities to remove clothing or place individuals in revealing situations. This function was accessible to anyone using the platform, creating widespread potential for abuse.

X's Response and Criticism

Amid mounting pressure, X announced on January 14 that it had implemented technological measures to prevent the Grok account from editing images of real people into revealing clothing such as bikinis, a restriction applying to all users, including paid subscribers. The company also said it would geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire in jurisdictions where such content is illegal.

However, the response was widely criticized as inadequate. British Prime Minister Keir Starmer's office called the move to restrict the feature only to paying subscribers insulting to victims and not a solution. Critics also noted that while the company claimed it had stopped Grok from undressing people on the X platform, the standalone Grok app retained this capability.

Legal Action

The controversy reached the courts when Ashley St. Clair, the mother of one of Elon Musk's children, filed a lawsuit against xAI on January 15. The lawsuit alleges that Grok generated explicit deepfake images of her, including one derived from a photograph taken when she was 14 years old. According to the complaint, images were created showing her stripped and placed in a string bikini covered with swastikas.

St. Clair claimed she had notified xAI that users were creating illicit deepfake photos of her as a child and as an adult in sexually explicit poses, requesting that the Grok service be prevented from creating the non-consensual images. The lawsuit alleges that even though Grok confirmed her images would not be used or altered without explicit consent, xAI continued to allow users to create more explicit AI generated images of her.

In response, xAI sued St. Clair in Texas for $75,000 the following day, arguing that by agreeing to xAI's terms of service, she is obligated to bring any dispute exclusively in the courts of Tarrant County, Texas.

Global Regulatory Response

Singapore's Infocomm Media Development Authority announced on January 20 that it is engaging X on the generation and distribution of non-consensual intimate images, noting that the platform has taken measures to address the issue. The UK's Ofcom launched a formal investigation into X on January 12 to determine whether it failed in its duty to protect users from illegal content.

California, where xAI is headquartered, has begun a probe into Grok over sexualized AI images. The European Commission ordered X to retain all internal documents and data related to Grok until the end of 2026 in response to the uproar. Irish and Indian regulators have also sought information about Grok-related safety issues.

Enforcement Challenges

Despite the restrictions implemented by Southeast Asian nations, authorities acknowledge that users in Malaysia, Indonesia, and the Philippines have been able to bypass the bans using VPN services. This highlights the enforcement challenges facing regulators attempting to control access to AI tools in an interconnected digital environment.

The crisis has exposed fundamental gaps in AI oversight and enforcement, raising urgent questions about what safeguards need to be in place before powerful generative AI tools are released to the public. As governments worldwide grapple with how to regulate these emerging technologies, the Grok controversy serves as a stark reminder of the real world harm that can result when AI safety measures prove inadequate.

Published January 20, 2026 at 7:18pm