URL has been copied successfully!
AI browsers can be tricked with malicious prompts hidden in URL fragments
URL has been copied successfully!

Collecting Cyber-News from over 60 sources

AI browsers can be tricked with malicious prompts hidden in URL fragments

Tricking users into clicking poisoned links: HashJack is essentially a social engineering attack because it relies on tricking users to click on specially crafted URLs inside emails, chats, websites, or documents. However, this attack can be highly credible because it points to legitimate websites.For example, imagine a spoofed email that claims to be from a bank advising customers about suspicious activity in their accounts. Hovering over the link included in the email shows that it points to the bank’s real website, HTTPS and everything, but it’s a long link and somewhere in it there’s the # character followed by a prompt for the AI assistant.Many users are likely to trust such a message since it points to the real bank’s website and because long links with a lot of parameters and paths in them are not unusual. But the prompt that follows the # symbol will cause the AI browser assistant to provide attacker-altered instructions to the user, such as calling an attacker-controlled phone number or WhatsApp link for further customer support about the supposed situation.In another scenario, a prompt included in the link can tell an AI browser that acts like an agent,  for example, Perplexity’s Comet,  to take information about the user’s account, transaction history, phone number, and so on from the opened bank site and append it as parameters in a request to the attacker’s server.Other attacks could involve the prompt causing the AI assistant to display fake information that would mislead the user: fake investment advice promoting a certain stock, fabricated news, dangerous medical advice like wrong doses for medicine, malicious instructions that could open a backdoor on the computer, instructions to re-authenticate that include a link to a phishing site, a link to download malware, and so on.URL fragments cannot modify page content. They are only used for in-page navigation using the code that’s already there, so they are normally harmless. However, it now turns out that they can be used to modify the output of in-browser AI assistants or agentic browsers, which gives them an entirely new risk profile.”This discovery is especially dangerous because it weaponizes legitimate websites through their URLs,” the researchers said. “Users see a trusted site, trust their AI browser, and in turn trust the AI assistant’s output-making the likelihood of success far higher than with traditional phishing.”Different behavior across AI assistantsThe impact was different between the tested AI assistants and across the various scenarios. For example, while prompt injections managed to influence the text output on all the products tested, injecting malicious links proved harder on Gemini Assistant for Chrome, where some links were rewritten as search URLs, and on Edge with Microsoft Copilot, which prompted for additional confirmation when clicking on links in messages.Perplexity’s Comet, which is an agentic browser that does more than a built-in AI assistant, was the most susceptible one because it also could fetch attacker URLs in the background, with context information attached as parameters.Microsoft and Perplexity deployed fixes, but Google did not consider the HashJack technique a vulnerability because it views this as part of intended behavior. It’s worth noting that Cato also tested Claude for Chrome and OpenAI’s Operator browser, but the HashJack technique didn’t work on them.

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4097087/ai-browsers-can-be-tricked-with-malicious-prompts-hidden-in-url-fragments.html

Loading

Share via Email
Share on Facebook
Tweet on X (Twitter)
Share on Whatsapp
Share on LinkedIn
Share on Xing
Copy link