Lessons for defenders and platform providers: Microsoft clarified that OpenAI's platform itself wasn't breached or exploited; rather, its legitimate API functions were misused as a relay channel, highlighting a growing risk as generative AI becomes part of enterprise and development workflows. Attackers can now co-opt public AI endpoints to mask malicious intent, making detection significantly harder.

In response, Microsoft and OpenAI disabled the attacker-linked accounts and API keys. The companies also urged defenders to inspect logs for outbound requests to unexpected domains such as api.openai.com, particularly from developer machines. Enabling tamper protection, real-time monitoring, and block mode in Defender, Microsoft said, can help detect the lateral movement and injection patterns used by SesameOp. "Microsoft Defender Antivirus detects this threat as 'Trojan:MSIL/Sesameop.A' (loader) and 'Backdoor:MSIL/Sesameop.A' (backdoor)," the researchers added.

Attackers continue to find inventive ways to weaponize AI. Recent disclosures have shown autonomous AI agents deployed to automate entire attack chains, generative AI used to accelerate ransomware campaigns, and prompt-injection techniques used to weaponize coding assistants.
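The log-inspection advice above can be sketched as a simple scanner. This is a minimal illustration, not Microsoft's tooling: the log line format, the developer-workstation subnet, and the helper name `flag_suspicious` are all assumptions made for the example; the only detail taken from the article is the target domain api.openai.com.

```python
import re

# Assumed developer-workstation subnet (hypothetical for this sketch).
DEV_SUBNET = "10.20."
# Legitimate endpoint that SesameOp abused as a C2 relay (from the article).
TARGET_HOST = "api.openai.com"

# Assumed log format: "<source-ip> <method> <destination-host> ..."
LOG_LINE = re.compile(r"^(?P<src>\S+)\s+\S+\s+(?P<host>\S+)")

def flag_suspicious(log_lines):
    """Return (source_ip, host) pairs where a dev machine reached the target host."""
    hits = []
    for line in log_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue  # skip lines that don't match the assumed format
        if m.group("host") == TARGET_HOST and m.group("src").startswith(DEV_SUBNET):
            hits.append((m.group("src"), m.group("host")))
    return hits

sample = [
    "10.20.1.15 GET api.openai.com",
    "10.30.2.7 GET example.com",
    "10.20.1.16 GET api.openai.com",
]
print(flag_suspicious(sample))
# → [('10.20.1.15', 'api.openai.com'), ('10.20.1.16', 'api.openai.com')]
```

In practice defenders would run an equivalent query against proxy, firewall, or DNS telemetry; the point is that because the domain itself is legitimate, the signal is not the destination alone but *which* machines are talking to it.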
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4083999/new-backdoor-sesameop-abuses-openai-assistants-api-for-stealthy-c2-operations.html

