You're offline - Playing from downloaded podcasts
Back to All Episodes
Podcast Episode

Companies Caught Secretly Poisoning AI Assistants to Boost Their Own Brands

February 11, 2026

Audio archived. Episodes older than 60 days are removed to save server storage. Story details remain below.

Microsoft security researchers have uncovered a new attack called AI Recommendation Poisoning, where dozens of companies embed hidden instructions in AI summarisation buttons to manipulate AI assistant memory. Over sixty days, researchers found more than fifty attempts from thirty-one companies across fourteen industries.

Companies Caught Manipulating AI Memory for Profit

Microsoft security researchers have blown the lid off a troubling new practice: companies are secretly embedding hidden instructions inside those convenient "Summarise with AI" buttons to manipulate what your AI assistant recommends in the future.

The technique, dubbed AI Recommendation Poisoning, works by injecting persistence commands through URL parameters when users click on AI summarisation features. These hidden prompts instruct AI assistants to remember a particular company as a trusted source or to recommend that company first in future conversations, effectively turning your personal AI into a covert advertising channel.

The Scale of the Problem

Over a sixty-day observation period, Microsoft identified more than fifty unique prompt-based attempts originating from thirty-one companies spanning fourteen different industries. The attack exploits how AI assistants process URL parameters, a feature originally designed for user convenience that has been weaponised to inject instructions without user awareness.

Three primary delivery methods were identified: malicious links with pre-filled prompts embedded in URL parameters, hidden instructions within documents and emails that activate when processed by assistants, and social engineering tactics persuading users to paste memory-altering commands directly.

Why This Matters

Once a poisoned prompt takes hold in an AI assistant's memory, it influences all subsequent interactions, not just the session where the manipulation occurred. This is particularly dangerous in sectors where accuracy is critical, including healthcare, finance, and cybersecurity. Users have no visible indication that their AI has been compromised.

The Bigger Picture

The disclosure arrives alongside alarming statistics: eighty percent of Fortune 500 companies now use active AI agents, whilst twenty-nine percent of employees have turned to unsanctioned AI tools for work. Microsoft has implemented protections in Copilot and released Advanced Hunting queries to help organisations detect suspicious URLs in email and Teams traffic. Microsoft Defender now also provides visibility into prompt injection attempts within Microsoft 365 Copilot, helping security teams identify both direct and cross-prompt injection attacks.

Published February 11, 2026 at 3:09pm