Nation-state groups used Gemini to accelerate attack operations: Google sees itself not just as a potential victim of AI cybercrime, but also as an unwilling enabler. Its report documented how government-backed threat actors from China, Iran, North Korea, and Russia integrated Gemini into their operations in late 2025. The company said it disabled accounts and assets associated with these groups.

Iranian threat actor APT42 used Gemini to craft targeted social engineering campaigns, feeding the AI biographical details about specific targets to generate conversation starters designed to build trust, according to the report. The group also used Gemini for translation and to understand cultural references in non-native languages.

Chinese groups APT31 and UNC795 used Gemini to automate vulnerability analysis, debug malicious code, and research exploitation techniques, the report found. North Korean hackers from UNC2970 mined Gemini for intelligence on defense contractors and cybersecurity firms, collecting details on organizational structures and job roles to support phishing campaigns.

Google added that Google DeepMind used the insights gained from these incidents to strengthen defenses against misuse.
Attackers integrate AI into malware operations: Gemini is being misused in other ways too, Google said, with some bad actors embedding its APIs directly into malicious code.

Google identified a new malware family it called HONESTCUE that integrates Gemini’s API directly into its operations, sending prompts to generate working code that the malware compiles and executes in memory. The prompts appear benign in isolation, allowing them to bypass Gemini’s safety filters, according to the report.

AttackIQ Field CISO Pete Luban sees services like Gemini as an easy way for hackers to up their game. “Integration of public AI models like Google Gemini into malware grants threat actors instant access to powerful LLM capabilities without needing to build or train anything themselves,” he said. “Malware capabilities have advanced exponentially, allowing for faster lateral movement, stealthier attack campaigns, and more convincing mimicry of typical company operations.”

Google also documented COINBAIT, a phishing kit built using AI code generation platforms, and Xanthorox, an underground service that advertised custom malware-generating AI but was actually a wrapper around commercial products including Gemini. The company shut down accounts and projects connected to both.

Luban said the pace of AI-enabled threats means traditional defenses are insufficient. “Continuous testing against realistic adversary behavior is essential to determining if security defenses are prepared to combat adaptive threats,” he said.
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4132098/google-fears-massive-attempt-to-clone-gemini-ai-through-model-extraction.html

