The new paradigm for raising up secure software engineers



Threat modeling as a core competency: This system-level thinking should also elevate the need for greater developer fluency in threat modeling, says Yasar. He notes that threat modeling has historically been difficult for product security and engineering teams to operationalize at scale. One of the longstanding barriers was the knowledge required to build effective threat models: teams struggled to understand enough about the organizational context of how applications were being used, the architecture, and the relevant risks to tie it all together and identify the most relevant potential threats.

AI may actually help here. By synthesizing organizational context and architectural patterns, AI can make it easier to build threat models that would previously have required extensive manual effort, Yasar says. But while AI can accelerate the mechanics of threat modeling, developers still need to understand the fundamentals: how to think about trust boundaries, how to identify assets worth protecting, and how to anticipate how attackers might abuse a feature. CISOs looking to shift developer training away from vulnerability avoidance may want to start treating threat modeling as a core competency instead.

This means that CTOs and CISOs need to help developers and the rest of the engineering team start to cultivate “threat modeling intuition,” says Michael Bell, founder and CEO of Suzu Labs. “It cannot be a simple ‘does this code work?’ check. But needs to morph into ‘how could this be abused?’,” he says. “We are offloading a large portion of the mental load to write the code, so let’s focus that opened time and opportunity to review the code being output.”

Bell believes that building up threat modeling intuition requires a higher level of hands-on, immersive training, such as work in cyber ranges that shows developers how attackers would target their applications.
“As AI handles more of the routine coding work, the human value shifts to judgment,” he says. “Hands-on training builds judgment in a way that lectures and videos don’t.”
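To make the fundamentals Yasar names (trust boundaries, assets worth protecting, abuse cases) concrete, here is a minimal sketch of what a lightweight threat record could look like in code. The field names, example threats, and mitigations are illustrative assumptions, not any standard framework or tool:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str            # what an attacker wants, e.g. a session token
    boundary: str         # the trust boundary the attack crosses
    abuse_case: str       # the answer to "how could this be abused?"
    mitigation: str = ""  # left empty until the team decides on one

def unmitigated(threats):
    """Surface threats the team has not yet addressed."""
    return [t for t in threats if not t.mitigation]

# Hypothetical example: a web login feature.
threats = [
    Threat("session token", "browser -> api", "token theft via XSS",
           "HttpOnly, Secure cookies"),
    Threat("password hashes", "api -> db", "offline cracking after a dump"),
]
for t in unmitigated(threats):
    print(f"OPEN: {t.abuse_case}")
```

Even a simple structure like this forces the "how could this be abused?" question per asset and boundary, rather than per line of code.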

Baking training cues into guardrails: The real trick to hands-on training is figuring out how to serve it up to developers in a high-velocity engineering environment. AI-assisted coding is only accelerating workflows and making production expectations even more breathless. A CISO asking to slow things down for training will get considerable side-eye from CTOs under the gun.

“Traditional, static, one-time courses don’t work in today’s development lifecycle,” says Pinna. “What’s proving effective is continuous, hands-on training in labs with realistic engineering scenarios. They also need contextual, just-in-time learning.”

The emerging approach among secure coding leaders is to blend platform engineering with targeted developer enablement, embedding security guidance directly into the workflows and tools developers already use. Rather than expecting developers to remember what they learned in last year’s training, security teams should be building guardrails that teach as they enforce, Pinna says.

“Security teams are creating guardrails that scale across development pipelines,” says Pinna. “These guardrails turn risks into guidance for developers and make sure that automated tools reinforce training. The goal is for training and enforcement to work together, so coming across a guardrail also helps developers understand security principles.”

Gupta describes a similar vision: “Instead of expecting users to read documentation, security expectations are built into pipelines, with pop-up explanations justifying the presence of a control and describing how to comply.”

It may even expand beyond a pop-up. Delivering on-demand micro-learning in five-, ten-, and fifteen-minute increments based on the exact issue the developer has run into can be incredibly powerful.
“The tools I’m using should help me out to learn,” Yasar says.

The data from guardrails and controls being triggered can be used by the AppSec team to drive the creation and delivery of more in-depth but targeted education. When the same vulnerability or integration pattern pops up again and again, that’s a signal for focused training on the subject.

“AppSec teams play a critical role in connecting automated findings to training,” Bell says. “When the same issue appears repeatedly, that’s a training opportunity.”
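A guardrail that "teaches as it enforces" could be as simple as a pre-commit-style check that, instead of failing with a bare error, explains why the pattern is blocked and points to a short lesson. This is a hedged sketch: the secret-detection regex, message wording, and the "secrets-101" course name are illustrative assumptions, not any real tool's output:

```python
import re

# Naive illustrative pattern for hardcoded credentials; real scanners
# (secret scanners, SAST tools) use far more robust detection.
SECRET_PATTERN = re.compile(
    r"(api[_-]?key|password)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE
)

def check_file(path: str, text: str) -> list[str]:
    """Return one teaching message per violation; empty list if clean."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if SECRET_PATTERN.search(line):
            findings.append(
                f"{path}:{lineno}: hardcoded credential detected.\n"
                "  Why this is blocked: secrets committed to source live on in "
                "history and forks, where rotation is hard.\n"
                "  5-minute lesson: load secrets from your secret manager "
                "instead (hypothetical internal course 'secrets-101')."
            )
    return findings

msgs = check_file("app.py", 'password = "hunter2"\nuser = load_user()')
print("\n".join(msgs) if msgs else "clean")
```

The same findings, aggregated over time, give the AppSec team exactly the repeat-offender signal described above for planning deeper training.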

The CISO’s new training agenda: Smart CISOs likely already understand that the vibe-coding landscape is going to demand more, not less, security savvy from the dev team. This will require security leaders to work more closely than ever with engineering leadership to influence a shift in both the content and the delivery mechanisms of security awareness training.

Beyond the basics already described here, security pundits say there’s another new security training wildcard that CISOs will desperately need to address as AI-assisted coding takes hold within their organization: developers will now need training in how to work securely within the AI tools themselves.

“CISOs need to ask: how can I train my engineers to use AI tools with a security mindset?” says Yasar. “How can I teach them to evaluate and verify what they’re asking and what they’re receiving from these tools? That’s going to come down to governance.”

This means working with CTOs and other relevant stakeholders to establish clear policies that define when AI-assisted code requires human review, what types of data can be used with AI tools, and how AI usage is governed before code reaches production. Gupta says organizations are already starting to formalize these rules as part of their broader developer enablement programs.

There’s also an opportunity here to finally make good on long-unachieved secure-by-design goals. CISOs can work with engineering teams to use prompt engineering guidance to embed security requirements at the point of code generation. Security teams that offer developers training and ready-made prompt language will help them produce more secure software from the start.

“Now I can bake compliance into my prompt. I can build up compliance by design into my architectures,” Yasar explains. “If I’m a developer I can prompt the tool to build me a web login and make sure that web login follows HITRUST compliance guidelines. I can say ‘here are the guidelines in detail.’ That’s going to give us a very good opportunity to insert compliance by design into the prompt itself.”

In this way, CISOs can harness the shift to AI-assisted coding to help build more resilient software than ever.

The bottom line is that developer training is here to stay. But CISOs need to put in the work to influence changes that embed security judgment into engineering culture. That means working hand-in-hand with CTOs to weave threat modeling, guardrails, and AI governance wisdom directly into the tools developers use every day.
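"Ready-made prompt language" could take the shape of a shared helper that prepends the security team's standing requirements to whatever a developer asks the AI tool to build. This is an illustrative sketch only: the requirement list is a generic example of login hygiene, not an actual HITRUST control set, and the function name is an assumption:

```python
# Hypothetical requirements block a security team might publish for
# login-related prompts; each item is illustrative, not a compliance list.
LOGIN_SECURITY_REQUIREMENTS = [
    "Hash passwords with a memory-hard KDF (e.g. argon2id); never store plaintext.",
    "Rate-limit and lock out repeated failed login attempts.",
    "Use constant-time comparison when checking credentials.",
    "Log authentication events without logging the credentials themselves.",
]

def secure_prompt(task: str, requirements: list[str]) -> str:
    """Wrap a developer's task in the team's standing security requirements."""
    reqs = "\n".join(f"- {r}" for r in requirements)
    return (
        f"Task: {task}\n\n"
        "Non-negotiable security requirements (verify each before answering):\n"
        f"{reqs}"
    )

prompt = secure_prompt(
    "Build a web login endpoint in Flask.",
    LOGIN_SECURITY_REQUIREMENTS,
)
print(prompt)
```

Shipping prompt fragments like this lets the security team version and review the requirements centrally, instead of hoping each developer remembers them at generation time.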

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4129134/the-new-paradigm-for-raising-up-secure-software-engineers.html

