Pushing falsehoods: A factor driving the recent popularity of recommendation poisoning appears to be the availability of open-source tools that make it easy to hide this function behind website Summarize buttons.This raises the uncomfortable possibility that poisoned buttons aren’t being added as an afterthought by SEO developers who get carried away. More likely, the intention from the start is to contaminate users’ AIs as a form of self-serving marketing.In Microsoft’s view, the dangers go beyond over-zealous marketing, and could just as easily be used to push falsehoods, dangerous advice, biased news sources, or commercial disinformation. What’s certain is that if legitimate companies are abusing the feature, cybercriminals won’t be shy about using it too.The good news is that the technique is relatively easy to spot and block, even if you don’t use Microsoft’s Microsoft 365 Copilot or Azure AI services, which the company says contain integrated protections.For individual users, this involves studying the saved information a chatbot has accumulated (how this is accessed varies by AI). For enterprise admins, in contrast, Microsoft recommends checking for URLs containing phrases such as ‘remember,’ ‘trusted source,’ ‘in future conversations,’ ‘authoritative source,’ and ‘cite or citation.’ None of this should be surprising. Once, URLs and file attachments were seen as convenient rather than inherently risky. AI is simply following the same path that every new technology must endure as it moves into the mainstream and becomes a target for misuse.As with other new technologies, users should educate themselves on the dangers posed by AI. “Avoid clicking AI links from untrusted sources: Treat AI assistant links with the same caution as executable downloads,” Microsoft recommended.This article originally appeared on CIO.com.
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4131078/companies-are-using-summarize-with-ai-to-manipulate-enterprise-chatbots-3.html
![]()

