Build capabilities for AI governance, content and quality: Upskilling existing analysts alone is not enough. As AI agents begin operating across tools, making decisions and triggering actions with minimal human involvement, the demands on the SOC will extend well beyond traditional analyst capabilities, experts say.

Content engineering, for instance, is one emerging requirement. In an AI-enabled SOC, detection engineers will no longer write only static rules. They must design dynamic content such as questions, prompts and investigation templates that agents can use to reason, enrich data, correlate signals and act autonomously. These content engineers curate the structured inputs that power agents, including telemetry, threat models, and playbooks.

“This is the most underappreciated role in AI-powered security operations,” Yoran notes. “These are people who build and maintain the questions that agents can ask, the investigation plans that guide autonomous work, and the knowledge bases that provide context.” Organizations need someone who can translate detection logic from their SIEM, import best practices from frameworks like MITRE ATT&CK, and encode institutional knowledge into the platform. “This isn’t traditional security engineering; it’s closer to knowledge management combined with threat intelligence,” he says.

Mature SOCs will also require clear ownership of AI governance and agent oversight. That includes roles responsible for model risk evaluation, prompt and policy management, continuous performance validation, and even red teaming the agents themselves, Seker says. “You don’t need a massive new team, but you do need clear accountability for how autonomous decisions are made, tested, and constrained.”

Another emerging need is analysts with deep fluency in data management. An AI-driven SOC will require professionals who understand how information should be classified, protected, normalized, and monitored to ensure reliable conclusions.
“With 64% of organizations planning to add AI-powered solutions to their security stack in the next year, it is critical for professionals to cross-skill in AI,” Carignan says. “Cybersecurity professionals must become fluent in AI and data, developing a deeper understanding of data classification, governance, and model behavior.” Cross-skills in data science, machine learning, and cybersecurity enable analysts to critically evaluate AI outputs, tune models for security use cases, and adapt defenses as threats evolve, making them indispensable in an AI-augmented SOC.

Frank Dickson, an analyst at IDC, urges organizations to think of this capability as similar to a data architect role. “The key to getting value from AI is having data located in a place where you can get to it, having it formatted in a homogeneous way so you can do analysis on it, and then manage the data,” he says. “The success of your AI initiative is going to be tied to the effectiveness of your ability to get data. A data architect manages that.”

Dickson also emphasizes the need for an “orchestration platform engineer” role responsible for ensuring effective communication and workflow integration across security tools. The SOC of the future will not hinge on a single platform but on an interconnected ecosystem of SIEM, EDR, SOAR, identity, cloud and other systems that must operate in concert to support AI-driven, agentic investigations and automation, Dickson says. Dedicated orchestration expertise will become essential to maintain reliable data flows and automation logic in such an environment, he notes.
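The data-architect concern Dickson raises, getting telemetry from many tools into one homogeneous shape before analysis, can be sketched as a small normalization layer. The source schemas and field mappings below are invented for illustration and are not tied to any vendor format.

```python
# Illustrative sketch: normalizing events from two hypothetical tools
# into one shared record shape so downstream AI analysis can treat
# them uniformly. All field names are invented for the example.

def normalize_edr(event: dict) -> dict:
    return {
        "timestamp": event["detect_time"],
        "source": "edr",
        "host": event["device_name"].lower(),
        "user": event.get("user_name", "unknown").lower(),
        "action": event["activity"],
    }

def normalize_firewall(event: dict) -> dict:
    return {
        "timestamp": event["ts"],
        "source": "firewall",
        "host": event["src_host"].lower(),
        "user": "unknown",  # firewall logs rarely carry identity
        "action": f'{event["proto"]}:{event["dst_port"]}',
    }

events = [
    normalize_edr({"detect_time": "2025-01-01T10:00:00Z",
                   "device_name": "LAPTOP-7", "activity": "process_start",
                   "user_name": "Alice"}),
    normalize_firewall({"ts": "2025-01-01T10:00:05Z", "src_host": "LAPTOP-7",
                        "proto": "tcp", "dst_port": 445}),
]
# Both records now share identical keys, so one query covers both tools.
print(sorted(events[0].keys()) == sorted(events[1].keys()))
```

In practice this mapping work is what community schemas aim to standardize, but the point stands regardless of schema: the AI sees one shape, not one per vendor.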
Redesign SOC processes and playbooks where needed: Organizations will need to review and rework SOC processes and playbooks to ensure their AI-augmented SOC is consistent, efficient and continuously learning. Yoran recommends that SOC leaders focus on codifying institutional knowledge into questions and plans that AI agents can access. Translate playbooks into investigation plans that AI agents can follow on a repeatable basis. In situations where an agent might hit a wall, have processes in place for a smooth handoff to a human analyst, and build feedback loops for continuous improvement, Yoran adds.

“Playbooks must shift from step-by-step human procedures to intent-based guardrails,” Seker points out. “Instead of telling analysts how to investigate, define what outcomes are allowed, what actions are prohibited, and when human approval is mandatory.” The objective is not to micromanage every alert but to assume AI agents operate continuously across tools, with humans only supervising exceptions, edge cases, and strategic decisions.

SOC leaders also need to rethink metrics, accountability, and documentation. Traditional performance indicators, such as ticket closure rates or mean time to resolution, may need to broaden to include model accuracy, escalation quality, and the effectiveness of automated containment actions. “The biggest mistake is optimizing for speed metrics instead of investigation quality,” Yoran says. “I see this constantly: vendors promising 90% faster time to resolution, or reducing tier-one workload by 80%, or closing alerts in seconds instead of hours. These metrics, while seductive, are dangerous,” he cautions. “Making the same mistake faster benefits no one. An incomplete investigation that closes in two minutes isn’t better than a thorough investigation that takes 30 minutes.”

Auditability, too, becomes critical. All AI-driven decisions should be traceable, explainable, and reviewable from both an internal governance standpoint and for external compliance requirements. “If you can’t explain why an AI took an action to an auditor, regulator, or executive, it shouldn’t be allowed to take that action. Explainability isn’t a nice-to-have; it’s a prerequisite for autonomy,” Seker says.
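Seker’s two requirements, intent-based guardrails and explainable decisions, can be pictured together as a small policy check that every proposed agent action passes through, recording a reviewable decision trail as it goes. The action names and policy fields below are hypothetical.

```python
# Illustrative sketch: an intent-based guardrail policy. Instead of
# scripting each investigation step, the policy declares which actions
# are prohibited outright and which require human approval, and every
# decision is logged so it can later be explained to an auditor.
# All action names are hypothetical.

POLICY = {
    "prohibited": {"delete_mailbox", "wipe_host"},
    "needs_approval": {"isolate_host", "disable_account"},
}

decision_trail = []  # the traceable, reviewable audit record

def evaluate(action: str, reason: str) -> str:
    """Return 'deny', 'escalate' (human approval required), or 'allow'."""
    if action in POLICY["prohibited"]:
        verdict = "deny"
    elif action in POLICY["needs_approval"]:
        verdict = "escalate"
    else:
        verdict = "allow"
    # Explainability: record what was decided, and why, for every action.
    decision_trail.append({"action": action, "reason": reason,
                           "verdict": verdict})
    return verdict

print(evaluate("enrich_ip", "correlate with threat intel"))  # allow
print(evaluate("isolate_host", "ransomware indicators"))     # escalate
print(evaluate("wipe_host", "agent suggested cleanup"))      # deny
```

Note that the agent never decides its own boundaries; humans edit the policy, and the trail shows an auditor exactly which rule produced which verdict.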
Implement AI guardrails and principles: Formal guardrails and operating principles are going to be critical in SOCs where AI agents influence decisions, initiate responses and help prioritize threats. That means setting defined boundaries around data access and model behavior, having processes to validate responses, and making sure humans remain in the loop on all high-impact decisions.

Focus areas should include approval thresholds for autonomous actions, defining allowed and disallowed actions for an agent, protecting against prompt injection attacks, testing and red-teaming agentic workflows, and ensuring incident response policies are updated for AI-driven actions. “Require transparent decision trails, rate limiting, least-privilege, and instant override,” Seker advises. “Hard limits on action scope, blast radius, and privilege are non-negotiable. Agents should operate under least-privilege identities, with explicit kill-switches, change-control boundaries, and environment awareness. The key is to ensure that AI is never allowed to silently escalate its own authority or modify guardrails without human approval.”

IDC analyst Dickson points to identity and access as two other areas to address through guardrails and policies. “In the past, when we gave humans access, we often over-provisioned by default. That approach does not work with agents. With agentic AI, permissions must start at least privilege, defined precisely from day one.” The focus should be on ensuring no standing privileges, implementing dynamic authorization and establishing clear role definitions, Dickson says. “Agentic AI is enormously powerful. Constraining access correctly is non-negotiable.”
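The hard limits Seker lists, least-privilege scope, rate limiting, and an instant override, can be sketched as a wrapper every agent action must pass through before it executes. The scope names, the per-minute limit, and the kill-switch flag below are illustrative assumptions, not a reference implementation.

```python
import time

# Illustrative sketch of runtime guardrails for agent actions:
# an explicit least-privilege scope allow-list, a rate limit to bound
# blast radius, and a kill switch for instant human override.
# All names and limits are hypothetical.

class Guardrails:
    def __init__(self, allowed_scopes, max_actions_per_minute=10):
        self.allowed_scopes = allowed_scopes  # least privilege: explicit allow-list
        self.max_per_minute = max_actions_per_minute
        self.kill_switch = False              # instant human override
        self._timestamps = []

    def permit(self, scope: str) -> bool:
        if self.kill_switch:                  # override beats everything
            return False
        if scope not in self.allowed_scopes:  # out-of-scope: denied
            return False
        now = time.monotonic()
        self._timestamps = [t for t in self._timestamps if now - t < 60]
        if len(self._timestamps) >= self.max_per_minute:
            return False                      # rate limit reached
        self._timestamps.append(now)
        return True

g = Guardrails(allowed_scopes={"read:alerts", "enrich:ip"},
               max_actions_per_minute=2)
print(g.permit("read:alerts"))   # True
print(g.permit("write:policy"))  # False: not in the agent's scopes
print(g.permit("enrich:ip"))     # True
print(g.permit("read:alerts"))   # False: rate limit hit
g.kill_switch = True
print(g.permit("enrich:ip"))     # False: human override engaged
```

The key property matches Seker’s rule: the agent cannot widen its own scopes, raise its own limits, or clear the kill switch, because those fields live outside the code path it controls.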
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4140208/4-ways-to-prepare-your-soc-for-agentic-ai.html

