Tactics of attackers: The OpenAI report, published in June, detailed a variety of defenses the company has deployed against fraudsters. One, for example, involved bogus job applications.

“We identified and banned ChatGPT accounts associated with what appeared to be multiple suspected deceptive employment campaigns. These threat actors used OpenAI’s models to develop materials supporting what may be fraudulent attempts to apply for IT, software engineering, and other remote jobs around the world,” the report said. “Although we cannot determine the locations or nationalities of the threat actors, their behaviors were consistent with activity publicly attributed to IT worker schemes connected to North Korea (DPRK). Some of the actors linked to these recent campaigns may have been employed as contractors by the core group of potential DPRK-linked threat actors to perform application tasks and operate hardware, including within the US.”

Another tactic involved a traditional cyberattack with malware.

“We banned a cluster of ChatGPT accounts that appeared to be operated by a Russian-speaking threat actor. This actor used our models to assist with developing and refining Windows malware, debugging code across multiple languages, and setting up their command-and-control infrastructure,” the report said. “The actor demonstrated knowledge of Windows internals and exhibited some operational security behaviors. Based on the operation’s focus on using a trojanized crosshair gaming tool and its stealthy tactics, we have dubbed it ScopeCreep.”

Perhaps the most interesting part of the report described attack-pattern tweaks that CISO teams can watch for.

“This threat actor had a notable approach to operational security. They utilized temporary email addresses to sign up for ChatGPT accounts, limiting each ChatGPT account to one conversation about making one incremental improvement to their code. They then abandoned the original account and created a new one,” the report noted. “The actor distributed the ScopeCreep malware through a publicly available code repository that impersonated a legitimate and popular crosshair overlay tool (Crosshair-X) for video games.”

The report said that unsuspecting users who downloaded and ran the malicious version would have additional malicious files downloaded from attacker infrastructure and executed. The malware would then initiate a multi-stage process to escalate privileges, establish stealthy persistence, notify the threat actor, and exfiltrate sensitive data while evading detection. “The threat actor utilized our model to assist in developing the malware iteratively, by continually requesting ChatGPT to implement further specific features,” OpenAI said.

Will Townsend, a VP and principal analyst with Moor Insights & Strategy, was more charitable than Gartner. “It clearly demonstrates the depth that OpenAI is taking to secure models and mitigate poisoning that can lead to hallucinations and GPU workload disruption,” Townsend said.
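The single-use-account pattern the report describes is concrete enough for platform abuse teams to encode as a monitoring heuristic. The sketch below is a hypothetical illustration of that idea, assuming a simple account record with a disposable-email deny-list; none of the field names, domains, or thresholds come from OpenAI's report.

```python
# Purely illustrative sketch of the signal described in the report: accounts
# created with throwaway email addresses that hold exactly one short
# conversation and are then abandoned. Fields, domains, and thresholds are
# assumptions for illustration, not any platform's real schema or logic.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative sample; real deny-lists of disposable-email providers are far larger.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

@dataclass
class Account:
    email: str
    created_at: datetime
    last_active_at: datetime
    conversation_count: int

def matches_burner_pattern(acct: Account, now: datetime) -> bool:
    """Flag accounts matching the one-conversation, quickly-abandoned pattern."""
    domain = acct.email.rsplit("@", 1)[-1].lower()
    if domain not in DISPOSABLE_DOMAINS:
        return False
    # Exactly one conversation, active under two hours, then idle over a day.
    short_lived = acct.last_active_at - acct.created_at < timedelta(hours=2)
    abandoned = now - acct.last_active_at > timedelta(days=1)
    return acct.conversation_count == 1 and short_lived and abandoned

# Example: an account that signed up, held one conversation, and went quiet.
now = datetime.now(timezone.utc)
acct = Account(
    email="dev123@mailinator.com",
    created_at=now - timedelta(days=3),
    last_active_at=now - timedelta(days=3) + timedelta(minutes=40),
    conversation_count=1,
)
print(matches_burner_pattern(acct, now))  # True
```

In practice a signal like this would be one weak indicator combined with others, which is consistent with the criticism below that single-signal, reactive detection is easy for attackers to evade.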
Detection ‘easy to sidestep’: However, Gartner’s Litan detailed several of her concerns about the OpenAI report that colored her opinion of it.

“It is reactive and measures [attacks] after misuse is detected,” such as after malware is created, Litan said. She also saw the proposed defense techniques as “resource-intense monitoring that relies on heavy-handed human resources for detection. Not scalable.”

She also observed that OpenAI did the obvious, in that it “only focuses on OpenAI models and not other AI platforms or open source models.”

Litan called the techniques OpenAI described relatively easy for attackers to sidestep. “There is a risk of attacker evasion [because] their reactive detection can’t keep up with fast-evolving tactics,” she said.
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4004641/is-attacker-laziness-enabled-by-genai-shortcuts-making-them-easier-to-catch.html