Continuous monitoring with human-in-the-loop control: While the first half of the advisory focused on limiting what agents can do, the second was about watching what they actually do and reacting quickly when things go sideways.

“Operators should implement continuous monitoring and auditing to maintain awareness of AI agent operation and ensure traceability for decisions and actions,” CISA added. “Continuous auditing processes improve security measures and ensure alignment with governance standards (such as risk management, oversight, and usage restrictions).”

CISA and its international partners also recommended integrating human control and oversight into agentic AI workflows, allowing agents to act autonomously only on approved, non-sensitive, low-risk tasks. For this, the agencies suggested live monitoring during task execution, human approval for decision-making steps, and auditing after task execution.

Experts agree that visibility is critical. “Security teams need continuous visibility into how agents behave, what systems they touch, and when their actions deviate from expected patterns,” said Nick Tausek, Lead Security Automation Architect at Swimlane. “Building human approval into high-risk workflows and automating containment is paramount for taking action when agent behavior crosses a line.”

Putting it all together, the advisory detailed core risk areas, from prompt injection and data exposure to tool misuse and privilege creep, and urged organizations to lock down privileged access, validate inputs and outputs, monitor agent behavior, and tightly control how these systems interact with data, tools, and other services.
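The pattern the agencies describe — automatic execution for low-risk actions, human approval for high-risk ones, and an audit trail for every decision — can be sketched in a few lines. This is a minimal illustration, not from the advisory; the tool names, risk policy, and helper functions are all hypothetical.

```python
import time

# Assumed policy: which agent tools count as high-risk (hypothetical names).
HIGH_RISK_TOOLS = {"delete_records", "send_email", "modify_permissions"}

def run_tool(name, args):
    """Stand-in for actual agent tool execution."""
    return f"executed {name}"

def gated_call(name, args, approver, audit_log):
    """Run an agent tool call with a human-in-the-loop gate.

    Low-risk tools run automatically; high-risk tools require the
    approver callback (the human decision point) to return True.
    Every call, approved or blocked, is appended to the audit log.
    """
    entry = {"ts": time.time(), "tool": name, "args": args}
    if name in HIGH_RISK_TOOLS:
        entry["approved"] = approver(name, args)  # human decision point
        if not entry["approved"]:
            entry["result"] = "blocked"
            audit_log.append(entry)
            return None
    entry["result"] = run_tool(name, args)
    audit_log.append(entry)
    return entry["result"]

log = []
# Low-risk call proceeds without approval; high-risk call is blocked here.
gated_call("summarize_doc", {"doc": "report.txt"}, approver=lambda n, a: False, audit_log=log)
gated_call("delete_records", {"table": "users"}, approver=lambda n, a: False, audit_log=log)
```

In a real deployment the approver callback would route to a ticketing or chat-ops workflow, and the audit log would feed the continuous monitoring the advisory calls for.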
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4166479/security-agencies-draw-red-lines-around-agentic-ai-deployments.html

