Podcast Episode
Microsoft Uncovers Corporate AI Memory Poisoning Campaign
February 11, 2026
Audio archived. Episodes older than 60 days are removed to save server storage. Story details remain below.
Microsoft security researchers have exposed a new attack called AI Recommendation Poisoning, where over 30 companies across 14 industries have been embedding hidden instructions in AI summarisation buttons to manipulate AI assistant memory and bias future recommendations toward their products.
A New Kind of Digital Manipulation
Microsoft's security team has blown the whistle on a sneaky new attack technique targeting AI assistants. Called AI Recommendation Poisoning, the method involves companies embedding hidden instructions inside those convenient "Summarise with AI" buttons found across the web. When users click these buttons, hidden commands are injected into their AI assistant's memory, instructing it to treat certain companies as trusted sources or recommend them first in future interactions.The Scale of the Problem
Over a 60-day observation period, Microsoft identified more than 50 unique prompt-based manipulation attempts originating from 31 different companies spanning 14 industries. The technique exploits how AI assistants process URL parameters, a feature designed for user convenience that has been weaponised to inject instructions without user awareness. Turnkey tools designed specifically for this purpose are already becoming widely available.Why It Matters
The implications extend far beyond annoying product placements. Compromised AI assistants can provide subtly biased recommendations on critical topics including healthcare, finance, and cybersecurity, all without users realising their AI has been tampered with. Once poisoned, the AI treats these injected instructions as legitimate user preferences, influencing every subsequent interaction.Defences and Detection
Microsoft has implemented protections in Copilot against these prompt injection attacks and released Advanced Hunting queries to help organisations detect suspicious URLs in email and Teams traffic. The disclosure arrives alongside a broader Microsoft Cyber Pulse report revealing that 80 percent of Fortune 500 companies now use active AI agents, while 29 percent of employees have turned to unsanctioned AI tools, creating significant security blind spots.Part of a Growing Threat Landscape
The research connects to wider AI security concerns, including the recently discovered HashJack technique where malicious prompts hidden in URL fragments can manipulate AI browser assistants. Microsoft and Perplexity have patched that vulnerability, though Google did not consider it a security issue. Microsoft Defender now provides visibility into prompt injection attempts within Microsoft 365 Copilot, helping security teams detect both user-initiated and external data source attacks.Published February 11, 2026 at 6:32pm