OpenAI says Codex Security found 11,000 high-impact bugs in a month

From the ‘Aardvark’ experiment to an AI security researcher: Codex Security evolved from an earlier internal project called Aardvark, an AI-powered vulnerability research agent that OpenAI began testing with select users. The concept behind Aardvark was to have the AI agent read code, test possible exploit paths, and reason through how an attacker might compromise a system.

This agentic workflow allows the Codex Security system to mimic how human security researchers operate. The AI analyzes repository history, builds a threat model that identifies entry points and trust boundaries, and then explores attack paths that could lead to sensitive outcomes.

Once a potential vulnerability is discovered, the system attempts to reproduce the issue in a sandbox environment to confirm that it is exploitable before reporting it. After validation, it generates remediation guidance, often in the form of proposed patches that developers can review and merge into their workflow.

Codex Security can also learn from feedback over time to improve the quality of its findings. “When you adjust the criticality of a finding, it can use that feedback to refine the threat model and improve precision on subsequent runs as it learns what matters in your architecture and risk posture,” the company added in the post.

Starting March 9, Codex Security is available in research preview to ChatGPT Pro, Enterprise, Business, and Edu customers via the Codex web, with free usage for the next 30 days.
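The workflow described above (threat model, attack-path exploration, sandbox validation, proposed patch) can be sketched as a simple agent loop. This is purely illustrative: every name here (`Finding`, `build_threat_model`, `sandbox_reproduce`, and so on) is a hypothetical stand-in, not OpenAI's actual API or implementation.

```python
# Hypothetical sketch of the agentic scan loop described in the article.
# All names and structures are illustrative assumptions, not OpenAI's code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    entry_point: str
    attack_path: str
    criticality: str = "high"
    validated: bool = False
    patch: Optional[str] = None

def build_threat_model(repo_history):
    # Identify entry points / trust boundaries: here, anything
    # flagged as receiving external input counts as an entry point.
    return [e for e in repo_history if e.get("external_input")]

def sandbox_reproduce(finding):
    # Stand-in for reproducing the issue in an isolated sandbox;
    # here a finding with a concrete suspected attack path "reproduces".
    return bool(finding.attack_path)

def scan(repo_history):
    findings = []
    for entry in build_threat_model(repo_history):
        f = Finding(entry_point=entry["path"],
                    attack_path=entry.get("suspected_flaw", ""))
        # Confirm exploitability before reporting, as the article describes.
        if sandbox_reproduce(f):
            f.validated = True
            # Remediation guidance as a proposed patch for developer review.
            f.patch = f"proposed fix for {f.entry_point}"
            findings.append(f)
    return findings

history = [
    {"path": "api/upload.py", "external_input": True,
     "suspected_flaw": "unsanitized file name -> path traversal"},
    {"path": "docs/README.md", "external_input": False},
]
print([f.entry_point for f in scan(history)])  # → ['api/upload.py']
```

The sandbox-validation gate is the key design point the article emphasizes: only findings that actually reproduce are reported, which is what keeps precision high enough for the "high-impact" claim to be meaningful.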

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4142354/openai-says-codex-security-found-11000-high-impact-bugs-in-a-month.html
