The risks of entry-level developers over-relying on AI

The risks of blind spots, compliance gaps, and license violations: As generative AI becomes more embedded in software development and security workflows, cybersecurity leaders are raising concerns about the blind spots it can introduce.

“AI can produce secure-looking code, but it lacks contextual awareness of the organization’s threat model, compliance needs, and adversarial risk environment,” Moolchandani says.

Tuskira’s CISO lists two major issues: first, that AI-generated security code may not be hardened against evolving attack techniques; and second, that it may fail to reflect the specific security landscape and needs of the organization. Additionally, AI-generated code might give a false sense of security, as developers, particularly inexperienced ones, often assume it is secure by default.

Furthermore, there are risks associated with compliance and violations of licensing terms or regulatory standards, which can lead to legal issues down the line. “Many AI tools, especially those generating code based on open-source codebases, can inadvertently introduce unvetted, improperly licensed, or even malicious code into your system,” O’Brien says. Open-source licenses, for example, often have specific requirements regarding attribution, redistribution, and modifications, and relying on AI-generated code could mean accidentally violating these licenses. “This is particularly dangerous in the context of software development for cybersecurity tools, where compliance with open-source licensing is not just a legal obligation but also impacts security posture,” O’Brien adds. “The risk of inadvertently violating intellectual property laws or triggering legal liabilities is significant.”

From a technological perspective, Wing To, CTO at Digital.ai, points out that AI-generated code should not be seen as a silver bullet. “The key challenge with AI-generated code, in security and other domains, is believing that it is of any better quality than code generated by a human,” he says. “AI-generated code runs the risk of including vulnerabilities, bugs, protected IP, and other quality issues buried in the trained data.”

The rise in AI-generated code reinforces the need for organizations to adopt best practices in software development and delivery. This includes consistently applying independent code reviews and implementing robust CI/CD processes with automated quality and security checks.
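To make the “secure-looking but unhardened” point concrete, here is a minimal, hypothetical Python sketch (the function and table names are invented for illustration): a generated database lookup can read cleanly yet remain open to SQL injection, while the version a reviewer should insist on binds the input as a parameter.

```python
import sqlite3

def find_user_ai_style(conn: sqlite3.Connection, username: str):
    # Pattern often seen in generated snippets: the query looks clean,
    # but f-string interpolation lets crafted input rewrite the SQL.
    cur = conn.execute(f"SELECT id, role FROM users WHERE name = '{username}'")
    return cur.fetchone()

def find_user_reviewed(conn: sqlite3.Connection, username: str):
    # Hardened version: the driver binds the value as a parameter,
    # so user input can never change the structure of the query.
    cur = conn.execute("SELECT id, role FROM users WHERE name = ?", (username,))
    return cur.fetchone()
```

Automated checks in the CI/CD pipeline, such as static analysis and license scanning, combined with independent review, are the kinds of guardrails that catch the first pattern before it ships.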

Changing the hiring process: Since generative AI is here to stay, CISOs and the organizations they serve can no longer afford to overlook its impact. In this new normal, it becomes necessary to set up guardrails that promote critical thinking, foster a deep understanding of code, and reinforce accountability across all teams involved in writing code of any kind.

Companies should also rethink how they evaluate technical skills during the hiring process, particularly when recruiting less experienced professionals, says Moolchandani. “Code tests may no longer be sufficient; there needs to be a greater focus on security reasoning, architecture, and adversarial thinking.”

During DH2i’s hiring process, Ngo says, they assess candidates’ dependence on AI to gauge their ability to think critically and work independently. “While we recognize the value of AI in enhancing productivity, we prefer to hire employees who possess a strong foundation in fundamental skills, allowing them to effectively use AI as a tool rather than relying on it as a crutch.”

Don Welch, global CIO at New York University, has a similar perspective, adding that the people who will thrive in this new paradigm will be the ones who stay curious, ask questions, and seek to understand the world around them as best they can. “Hire people where growth and learning are important to them,” Welch says.

Some cybersecurity leaders fear that becoming over-reliant on AI can deepen the talent shortage the industry already struggles with. For small and mid-sized organizations, it can become increasingly difficult to find skilled people and then help them grow. “If the next generation of security professionals is trained primarily to use AI rather than think critically about security challenges, the industry may struggle to cultivate the experienced leaders necessary to drive innovation and resilience,” Hasnis says.

Generative AI must not replace coding knowledge: Early-career professionals who use AI tools to write code without developing a deep technical foundation are at high risk of stagnating. They might never gain a solid understanding of attack vectors, system internals, or secure software design, says Moolchandani. “Mid-to-long term, this could limit their growth into senior security roles, where expertise in threat modelling, exploitability analysis, and security engineering is crucial. Companies will likely differentiate between those who augment their skills with AI and those who depend on AI to bridge fundamental gaps.”

Moolchandani and others recommend that organizations increase their training efforts and adjust how they transfer knowledge. “On-the-job training has to be more hands-on, focusing on real-world vulnerabilities, exploitation techniques, and secure coding principles,” he says.

Mattson says organizations should focus on helping employees keep building relevant skills. Technology will evolve quickly, and training programs alone may not be enough to keep pace. “But a culture of continuous skill improvement is durable for any change that comes,” Mattson adds.

These training programs should help employees understand both the strengths and limitations of AI, learning when to rely on these tools and when human intervention is mandatory, says Hasnis. “By combining AI-driven efficiency with human oversight, companies can harness the power of AI while ensuring their security teams remain engaged, skilled, and resilient,” he says. He advises developers to always question AI outputs, especially in security-sensitive environments.

O’Brien also believes that AI should go hand in hand with human expertise. “Companies need to create a culture where AI is seen as a tool: one that can help but not replace a deep understanding of programming and traditional software development and deployment,” he says.

“It’s essential that companies don’t fall into the trap of just using AI to patch over a lack of expertise.”

First seen on csoonline.com

Jump to article: www.csoonline.com/article/3951403/the-risks-of-entry-level-developers-over-relying-on-ai.html
