
AI Chatbots Now Citing Musk's Grokipedia, Raising Misinformation Alarms

January 31, 2026

Audio archived. Episodes older than 60 days are removed to save server storage. Story details remain below.

Major AI chatbots including ChatGPT, Gemini, and Claude are increasingly citing Elon Musk's AI-generated encyclopedia Grokipedia as a source. Testing reveals Grokipedia appeared in over 263,000 ChatGPT responses, sparking concerns about misinformation spreading across AI platforms as experts warn the AI-written reference is prone to factual errors and ideological bias.

AI Chatbots Embrace Controversial AI Encyclopedia

A troubling trend is emerging across major artificial intelligence platforms: chatbots from OpenAI, Google, Microsoft, and Anthropic are increasingly pulling information from Grokipedia, an AI-generated encyclopedia launched by Elon Musk's xAI in late October 2025.

New research from SEO firm Ahrefs reveals the scale of this development. Testing across 13.6 million prompts found Grokipedia appearing in more than 263,000 ChatGPT responses, with those responses citing approximately 95,000 individual pages from the platform. While this remains dwarfed by Wikipedia's nearly 2.9 million appearances over the same testing period, the trajectory is concerning to researchers.

Accuracy Under Scrutiny

Unlike Wikipedia, which relies on human editors and established verification processes, Grokipedia is generated and maintained almost entirely by Grok, xAI's chatbot, with minimal human oversight. Independent reviews by PolitiFact and Wired have documented significant problems with the platform, including factual errors, fabricated citations, and content that appears ideologically skewed.

Researchers have flagged particularly troubling entries containing misleading claims about historical events and social issues. Taha Yasseri, chair of technology and society at Trinity College Dublin, warns that fluent-sounding text is easily mistaken for reliable information.

The Feedback Loop Problem

The concern extends beyond a single platform. Testing by The Guardian found that ChatGPT's GPT-5.2 model cited Grokipedia nine times across various queries, while Anthropic's Claude has also referenced the platform on topics ranging from oil production to Scottish beers. This creates a potential feedback loop in which AI-generated misinformation from one system influences others.

Platform Responses Diverge

OpenAI has stated that ChatGPT aims to draw from diverse publicly available sources while applying safety filters. When approached for comment, xAI offered a characteristically terse response: "Legacy media lies."

