Even after a fix was issued, lingering prompt injection risks in GitLab’s AI assistant might allow attackers to indirectly deliver developers malware, dirty links, and more.
First seen on darkreading.com
Jump to article: www.darkreading.com/application-security/gitlab-ai-assistant-opened-devs-to-code-theft
![]()

