The ‘manager of agents’: How AI evolves the SOC analyst role

From doing the work to directing it: What agentic AI introduces into the SOC is the ability to delegate. Instead of analysts manually gathering evidence and stitching together context, AI agents can now autonomously execute investigative steps: querying systems, correlating signals and building evidence chains in real time. It doesn’t remove the human from the process. It elevates them within it.

The emerging model is one where analysts manage a system of agents, each responsible for a piece of the investigation, rather than performing each step themselves. The human role shifts from operator to orchestrator.

What I consistently hear from security leaders isn’t, “I need my analysts to move faster.” It’s, “I need my analysts to stop collecting data and start making decisions based on it.” Those are fundamentally different problems. And the gap between them is where AI creates the most value.

The rise of the ‘manager of agents’: This is where the Tier 1 role evolves, not disappears. In this new model, entry-level analysts are effectively managing a swarm of AI agents. They are responsible for reviewing investigations, validating conclusions and ensuring actions align with business context and risk tolerance.

They are not “in the loop” for every step. They are “on the loop”, overseeing outcomes rather than executing tasks. When analysts are forced to stay in the loop, checking every enrichment, every query, every intermediate step, they become a bottleneck. When they move on the loop, they can operate at scale, reviewing dozens or hundreds of investigations with the right level of oversight.

This is how trust in AI is built: not by asking humans to verify everything, but by giving them the visibility to verify anything. Transparency becomes the control plane. Analysts can see exactly what the AI did, how it reached a conclusion and where uncertainty exists. Over time, as accuracy proves out, they naturally increase their level of trust, just as they would with a new colleague joining the team.

Why cybersecurity is different: The fear of job displacement is understandable. In many industries, AI is reducing the need for entry-level roles. Cybersecurity is one of the few domains where AI won’t reduce work. It will expose how much work we’ve been unable to do.

The volume and complexity of threats are increasing faster than teams can scale. Attackers are already using AI to automate reconnaissance, generate code and accelerate exploitation. Defenders don’t have the option to sit this out.

Threat hunting, detection engineering and control optimization have historically been under-resourced because teams were consumed by alert triage. When AI removes that burden, it creates much-needed capacity for analysts to do what they were trained to do. The work doesn’t shrink. The right work finally gets done.

A new baseline for entry-level talent: This shift also changes what we expect from entry-level analysts. Historically, Tier 1 roles were designed as places where analysts learned by doing repetitive tasks. That model no longer makes sense when those tasks can be automated.

The baseline is moving toward understanding how AI systems operate: interpreting their outputs, questioning their reasoning and guiding their behavior. Human-centric skills become more important, not less. Curiosity, critical thinking and the ability to connect disparate signals into a coherent narrative: these are the differentiators in an AI-driven SOC.

We’re already seeing organizations rethink how they hire for these roles. There is less emphasis on credentials and more on how someone thinks and solves problems. When AI handles the mechanics, judgment is the job.

Building trust that holds: If the future is so clear, why is there resistance? In most cases, it comes down to trust, and trust must be earned, not assumed.

The deployments I’ve seen fail share a common pattern: organizations treat AI as a binary shift from no automation to full autonomy. That’s not how security teams work, and it’s not how they should be asked to work.

What works is a progression. Start with limited, high-confidence use cases. Provide full transparency into how the system reaches its conclusions. Let analysts validate outcomes before expanding the scope. And critically, put practitioners in the room: not implementation consultants or project managers, but people who have run SOC shifts, triaged thousands of alerts and earned credibility the hard way.

This is why, when we deploy, we bring former SOC leads, threat hunters and detection engineers to work directly alongside analyst teams. They’re not there to configure software. They’re there to build trust in the system, because they’ve already earned trust from the people using it. When analysts see that the people helping them deploy this technology have lived the same grind, the conversation changes. It stops being “will this replace me?” and starts being “how do I use this well?”

That shift in orientation, from threat to tool, is what separates a successful deployment from one that stalls. The trust gap isn’t a technology problem. It’s a human one. And it closes the same way trust always closes: through demonstrated competence, shared context and time.

The future SOC is human-led: The end state here is not an autonomous SOC with no humans involved. It’s a human-led SOC, powered by AI.

AI agents handle the labor-intensive, evidence-gathering aspects of security operations. Humans provide direction, oversight and accountability. Together, they operate at a speed and scale neither could achieve alone. That’s not a theory; it’s what’s happening in production environments today.

Elevation, not elimination: The narrative that AI will eliminate Tier 1 analysts misses the point. The role isn’t going away. It’s being redefined.

The analysts who succeed in this new environment will be those who can manage intelligence systems, interpret complex outputs and make high-quality decisions under uncertainty. They won’t be replaced. They’ll be promoted.

This article is published as part of the Foundry Expert Contributor Network.

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4163299/the-manager-of-agents-how-ai-evolves-the-soc-analyst-role.html
