2. Machine-learning generative adversarial networks: Michel Sahyoun, chief solutions architect with cybersecurity technology firm NopalCyber, recommends using generative adversarial networks (GANs) to create, as well as protect against, highly sophisticated, previously unseen cyberattacks. “This technique enables cybersecurity systems to learn and adapt by training against a very large number of simulated threats,” he says.

GANs allow systems to learn from millions of novel attack scenarios and develop effective defenses, Sahyoun says. “By simulating attacks that haven’t yet occurred, adversarial AI helps proactively prepare for emerging threats, narrowing the gap between offensive innovation and defensive readiness.”

A GAN consists of two core components: a generator and a discriminator. “The generator produces realistic cyberattack scenarios, such as novel malware variants, phishing emails, or network intrusion patterns, by mimicking real-world attacker tactics,” Sahyoun explains. The discriminator evaluates these scenarios, learning to distinguish malicious activity from legitimate behavior. Together, they form a dynamic feedback loop. “The generator refines its attack simulations based on the discriminator’s assessments, while the discriminator continuously improves its ability to detect increasingly sophisticated threats.”
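To make the generator-discriminator loop concrete, here is a minimal GAN sketch in PyTorch, assuming attack activity has already been encoded as fixed-size feature vectors (flow statistics, for instance). The dimensions, random stand-in data, and two-layer networks are assumptions for illustration, not Sahyoun's or NopalCyber's implementation.

```python
# Minimal GAN sketch (PyTorch): a generator learns to produce synthetic
# attack-like feature vectors while a discriminator learns to separate
# them from real traffic features. Sizes and data are placeholders.
import torch
import torch.nn as nn

LATENT_DIM, FEATURE_DIM = 32, 16  # assumed sizes for this sketch

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, FEATURE_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(FEATURE_DIM, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),  # raw logit: real vs. generated
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_traffic = torch.randn(512, FEATURE_DIM)  # stand-in for real flow features

for step in range(1000):
    real = real_traffic[torch.randint(0, 512, (64,))]
    fake = generator(torch.randn(64, LATENT_DIM))

    # Discriminator update: label real samples 1, generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: push fakes toward being classified as real,
    # the "refines its attack simulations" half of the feedback loop.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

In a real pipeline, the real samples would come from captured traffic, the generator's outputs would drive attack simulation, and the discriminator would be hardened into a detector.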
3. An AI analyst assistant: By automating the labor-intensive process of threat triage, Hughes Network Systems is leveraging gen AI to elevate the role of the entry-level analyst. “Our AI engine actively monitors security alerts, correlates data from multiple sources, and generates contextual narratives that would otherwise require significant manual effort,” says Ajith Edakandi, cybersecurity product lead at Hughes Enterprise. “This approach positions the AI not as a replacement for human analysts, but as an intelligent assistant that performs much of the initial investigative groundwork.”

Edakandi says the approach significantly improves the efficiency of security operations centers (SOCs) by allowing analysts to process alerts faster and with greater precision. “A single alert often triggers a cascade of follow-up actions: checking logs, cross-referencing threat intelligence, assessing business impact, and more,” he states. “Our AI streamlines this [process] by performing these steps in parallel and at machine speed, ultimately allowing human analysts to focus on validating and responding to threats rather than spending valuable time gathering context.”

The AI engine is trained on established analyst playbooks and runbooks, learning the typical steps taken during various types of investigations, Edakandi says. “When an alert is received, AI initiates those same investigative actions [as humans], pulling data from trusted sources, correlating findings, and synthesizing the threat story.” The final output is an analyst-ready summary, effectively reducing investigation time from nearly an hour to just minutes. “It also enables analysts to handle a higher volume of alerts,” he notes.
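As a rough illustration of that pattern, the sketch below runs a playbook's investigative steps concurrently and folds the results into a single narrative. Every function and data source here is a hypothetical stub standing in for a SIEM query, a threat-intel lookup, and an asset inventory, not Hughes' actual engine; in practice the summary would also come from an LLM rather than string formatting.

```python
# Sketch of the "AI analyst" pattern: run a playbook's investigative steps
# in parallel, then synthesize the findings into an analyst-ready summary.
# All functions, fields, and indicators below are invented stubs.
import asyncio

async def pull_logs(alert):           # stub: query a SIEM for related events
    return {"logins": 3, "host": alert["host"]}

async def check_threat_intel(alert):  # stub: look up indicators in a TI feed
    return {"ioc_match": alert["src_ip"] in {"203.0.113.7"}}

async def assess_impact(alert):       # stub: map host to business criticality
    return {"criticality": "high" if alert["host"].startswith("db-") else "low"}

async def triage(alert):
    # The playbook steps run concurrently, at machine speed.
    logs, intel, impact = await asyncio.gather(
        pull_logs(alert), check_threat_intel(alert), assess_impact(alert)
    )
    # Synthesize a contextual narrative for the human analyst to validate.
    verdict = "escalate" if intel["ioc_match"] or impact["criticality"] == "high" else "monitor"
    return (f"Alert on {alert['host']}: {logs['logins']} related logins, "
            f"IOC match={intel['ioc_match']}, impact={impact['criticality']}, "
            f"recommended action: {verdict}")

alert = {"host": "db-prod-01", "src_ip": "203.0.113.7"}
print(asyncio.run(triage(alert)))
```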
4. AI models that detect micro-deviations: AI models can be used to baseline system behavior, detecting micro-deviations that humans or traditional rule- or threshold-based systems would miss, says Steve Tcherchian, CEO of security services and products firm XYPRO Technology. “Instead of chasing known bad behaviors, the AI continuously learns what ‘good’ looks like at the system, user, network, and process levels,” he explains. “It then flags anything that strays from that norm, even if it hasn’t been seen before.”

Fed real-time data, process logs, authentication patterns, and network flows, the AI models are continuously trained on normal behavior as a means of detecting anomalous activity. “When something deviates, like a user logging in at an odd hour from a new location, a risk signal is triggered,” Tcherchian says. “Over time, the model gets smarter and increasingly precise as more and more of these signals are identified.”
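A minimal version of that baseline-then-flag idea can be sketched with scikit-learn's IsolationForest, fit only on "normal" login behavior and then asked to score a new event. The features, distributions, and contamination rate below are assumptions for illustration, not XYPRO's model.

```python
# Baseline-then-flag sketch: fit an anomaly detector on normal behavior,
# then score new events by how far they stray from the learned baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Historical "good" behavior: [login hour, km from usual location, failed attempts]
normal = np.column_stack([
    rng.normal(10, 2, 5000),    # logins cluster around mid-morning
    rng.exponential(5, 5000),   # usually close to the usual location
    rng.poisson(0.2, 5000),     # almost no failed attempts
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A login at 3 a.m., 4,000 km away, after several failures: no single rule
# may catch it, but taken together it sits far from the learned norm.
suspect = np.array([[3.0, 4000.0, 4.0]])
print(model.predict(suspect))            # -1 means anomalous
print(model.decision_function(suspect))  # lower = further from the baseline
```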
5. Automated alert triage, investigation, and response: A 1,000-person company can easily get 200 alerts in a day, observes Kumar Saurabh, CEO of managed detection and response firm AirMDR. “To thoroughly investigate an alert, it takes a human analyst at best 20 minutes,” he says. At 20 minutes apiece, 200 daily alerts amount to roughly 67 hours of work, so you’d need at least nine analysts to investigate every single alert. “Therefore, most alerts are ignored or not investigated thoroughly.”

AI analyst technology examines each alert and then determines what other pieces of data it needs to gather to make an accurate decision on whether the alert is benign or serious. The AI analyst talks to other tools within the enterprise’s security stack to gather the data needed to decide whether the alert requires action or can be safely dismissed. “If it’s malicious, the technology figures out what actions need to be taken to remediate and/or recover from the threat and immediately notifies the security team,” Saurabh says.
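That gather-until-confident loop might look like the following sketch, in which the triage logic queries one (stubbed) tool at a time and stops as soon as the accumulated evidence clears a confidence threshold. The sources, scores, indicator values, and remediation actions are all hypothetical stand-ins for a real security stack.

```python
# Sketch of an AI-analyst triage loop: keep requesting evidence from the
# security stack until the alert can be called benign or malicious, then
# propose remediation. Sources, scores, and thresholds are invented.
EVIDENCE_SOURCES = {
    "edr": lambda alert: 0.4 if alert["process"] == "powershell.exe" else 0.0,
    "threat_intel": lambda alert: 0.5 if alert["hash"] == "bad123" else 0.0,
    "identity": lambda alert: 0.2 if alert["user"] == "svc-admin" else 0.0,
}

def triage(alert, threshold=0.7):
    score, checked = 0.0, []
    for name, check in EVIDENCE_SOURCES.items():  # query one tool at a time
        score += check(alert)
        checked.append(name)
        if score >= threshold:  # confident enough: stop gathering, act
            return {"verdict": "malicious", "evidence": checked,
                    "actions": ["isolate host", "disable account", "notify SOC"]}
    return {"verdict": "benign", "evidence": checked, "actions": []}

print(triage({"process": "powershell.exe", "hash": "bad123", "user": "jdoe"}))
```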
6. Proactive generative deception: A truly novel approach to AI in cybersecurity is using proactive generative deception within a dynamic threat landscape, says Gyan Chawdhary, CEO of cybersecurity training firm Kontra. “Instead of just detecting threats, we can train AI to continuously create and deploy highly realistic, yet fake, network segments, data, and user behaviors,” he explains. “Think of it as building an ever-evolving digital funhouse for attackers.”

Chawdhary adds that the approach goes beyond traditional honeypots by making the deception far more pervasive, intelligent, and adaptive, aiming to exhaust and confuse attackers before they can reach legitimate assets.

The approach is valuable because it shifts the power dynamic, Chawdhary says. “Instead of constantly reacting to new threats, we force attackers to react to our AI-generated illusions,” he says. “It significantly increases the cost and time for attackers, as they waste resources exploring decoy systems, exfiltrating fake data, and analyzing fabricated network traffic.” The technique not only buys valuable time for defenders but also provides a rich source of threat intelligence about attackers’ tactics, techniques, and procedures (TTPs) as they interact with the deceptive environment.

On the downside, developing a proactive generative deception environment requires significant resources spanning several domains. “You’ll need a robust cloud-based infrastructure to host the dynamic decoy environments, powerful GPU resources for training and running the generative AI models, and a team of highly skilled AI/ML engineers, cybersecurity architects, and network specialists,” Chawdhary warns. “Additionally, access to diverse and extensive datasets of both benign and malicious network traffic is crucial to train the AI to generate truly convincing deceptions.”
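The asset-generation side of such a system can be sketched simply: mint decoys that mimic a plausible naming convention, then treat any contact with one as high-confidence intelligence, since no legitimate user should ever touch a decoy. The naming scheme, fields, and event hook below are invented for illustration; a production system would use generative models and live decoy infrastructure rather than a dictionary of fakes.

```python
# Sketch of generative deception's asset side: fabricate plausible decoy
# hosts and credentials, and log any touch of a decoy as threat intel.
import random
import secrets
from datetime import datetime, timezone

ROLES = ["db", "app", "file", "backup"]

def mint_decoy_host():
    """Fabricate a host that blends into a typical naming convention."""
    return {
        "hostname": f"{random.choice(ROLES)}-prod-{random.randint(10, 99)}",
        "ip": f"10.20.{random.randint(0, 255)}.{random.randint(2, 254)}",
        "credential": ("svc_" + secrets.token_hex(3), secrets.token_urlsafe(12)),
    }

DECOYS = {d["ip"]: d for d in (mint_decoy_host() for _ in range(20))}

def on_network_event(src_ip, dst_ip):
    """Any contact with a decoy is, by construction, suspicious: record TTPs."""
    if dst_ip in DECOYS:
        print(f"{datetime.now(timezone.utc).isoformat()} DECEPTION HIT: "
              f"{src_ip} probed {DECOYS[dst_ip]['hostname']} ({dst_ip})")

demo_ip = next(iter(DECOYS))
on_network_event("198.51.100.23", demo_ip)  # attacker touches a decoy
```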
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4059116/6-novel-ways-to-use-ai-in-cybersecurity.html