URL has been copied successfully!
ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands
URL has been copied successfully!

Collecting Cyber-News from over 60 sources

ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands

The newly released OpenAI Atlas web browser has been found to be susceptible to a prompt injection attack where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit.”The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to the agent,” NeuralTrust said in a report published Friday

First seen on thehackernews.com

Jump to article: thehackernews.com/2025/10/chatgpt-atlas-browser-can-be-tricked-by.html

Loading

Share via Email
Share on Facebook
Tweet on X (Twitter)
Share on Whatsapp
Share on LinkedIn
Share on Xing
Copy link