- Custom face generation for dating scams
- Audio spoofing for voice verification fraud
- On-demand video avatars that lip-sync based on customer-submitted scripts

These services are increasingly offered with add-ons such as pre-loaded backstories, matching fake documents, and automated scheduling for calls.
Prompt engineering as a service: Underground communities have also emerged around the art of crafting jailbreak prompts. These “bypass builders” specialize in defeating the guardrails of mainstream LLMs (e.g., ChatGPT or Gemini) to unlock restricted outputs such as social engineering scripts, step-by-step hacking tutorials, and bank fraud playbooks, including “know your customer” (KYC) bypass guides.

“This ‘prompt engineering as a service’ (PEaaS) lowers the barrier for entry, allowing a wider range of actors to leverage sophisticated AI capabilities through pre-packaged malicious prompts,” Gray warns.

“Together, these trends create an adaptive threat: tailored models become more potent when refined with illicit data, PEaaS expands the reach of threat actors, and the continuous refinement ensures constant evolution against defenses,” he says.
Deep dive: Flashpoint analysts tracked these developments in real time across more than 100,000 illicit sources, monitoring everything from dark web marketplaces and Telegram groups to underground LLM communities. Between Jan. 1 and May 30, 2025, the researchers logged more than 2.5 million AI-related posts covering various nefarious tactics, including jailbreak prompts, deepfake service ads, phishing toolkits, and bespoke language models built for fraud and other forms of cybercrime.
Underground LLM tactics and strategies: Related research from Cisco Talos warns that cybercriminals continue to adopt LLMs to streamline their processes, write tools and scripts that can be used to compromise users, and generate content that can more easily bypass defenses.

Talos observed cybercriminals resorting to uncensored LLMs or even custom-built criminal LLMs for illicit purposes. Advertised features of malicious LLMs suggest that cybercriminals are linking these systems to external tools to scan sites for vulnerabilities, verify stolen credit card numbers, and perform other malicious actions. At the same time, adversaries are often jailbreaking legitimate models faster than LLM developers can secure them, Talos warns.
Defense against the dark (AI) arts: Flashpoint’s “AI and Threat Intelligence: The Defenders’ Guide” explains that while AI is a double-edged sword in cybersecurity, defenders who thoughtfully integrate AI into their threat intelligence and response workflows can outpace adversaries. Enterprises need to balance automation with expert analysis, separate hype from reality, and continuously adapt to the rapidly evolving threat landscape.

“Defenders should start by viewing AI as an augmentation of human expertise, not a replacement,” Flashpoint’s Gray says. “This philosophy ensures AI strengthens existing workflows, driving value by reducing noise and accelerating decision-making, rather than creating new blind spots.”

Gray adds: “The organizing principle should enhance their collection advantage by utilizing AI to derive insights from high-signal data, accelerating discovery, and structuring unstructured content. Ultimately, the aim is to improve efficiency by empowering analysts with tools that assist their judgment, maintain human control, and provide context.”
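To make “structuring unstructured content” with a human in the loop more concrete, the sketch below shows one way a defender might route a scraped underground post through an LLM to pull out indicators and a tactic label, while keeping every record flagged for analyst review. This is an illustrative assumption, not Flashpoint’s or Talos’ tooling: the llm_extract() function, the prompt text, and the field names are all placeholders.

```python
# Minimal sketch (assumed, not any vendor's pipeline): structure an unstructured
# threat-intel post with an LLM, keeping the human analyst in control.
import json
import re
from dataclasses import dataclass, field

@dataclass
class StructuredPost:
    source: str
    iocs: list = field(default_factory=list)   # e.g., domains, hashes, wallet addresses
    tactic: str = "unknown"                     # e.g., "deepfake service ad", "jailbreak prompt"
    needs_review: bool = True                   # analyst retains final judgment

EXTRACTION_PROMPT = (
    "Extract any indicators of compromise and classify the criminal tactic in the "
    'following underground forum post. Respond as JSON with keys "iocs" (list of '
    'strings) and "tactic" (string).\n\nPost:\n{post}'
)

def llm_extract(prompt: str) -> str:
    """Placeholder for a real model call (hosted API or local model)."""
    raise NotImplementedError("Wire this to your organization's approved LLM endpoint")

def structure_post(source: str, raw_text: str) -> StructuredPost:
    """Turn a raw scraped post into a structured record, always flagged for review."""
    try:
        reply = llm_extract(EXTRACTION_PROMPT.format(post=raw_text))
        data = json.loads(reply)
        return StructuredPost(source=source,
                              iocs=data.get("iocs", []),
                              tactic=data.get("tactic", "unknown"))
    except Exception:
        # If the model call or parsing fails, fall back to a crude regex pass so
        # nothing is silently dropped; the record stays queued for a human.
        fallback_iocs = re.findall(r"\b[a-z0-9.-]+\.(?:onion|com|net|io)\b", raw_text, re.I)
        return StructuredPost(source=source, iocs=fallback_iocs)
```

In practice the placeholder call would be wired to whatever model endpoint the organization has approved, and the persistent needs_review flag reflects Gray’s point: the AI reduces noise and accelerates triage, but the analyst keeps the final call.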
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4014238/cybercriminals-take-malicious-ai-to-the-next-level.html