The CISO’s 5-step guide to securing AI operations


2. Develop a comprehensive and continuous view of AI risks: Getting a handle on organizational AI risks starts with the basics: an AI asset inventory, software bills of materials, vulnerability and exposure management best practices, and an AI risk register. Beyond basic hygiene, CISOs and security professionals must understand the fine points of AI-specific threats such as model poisoning, data inference, and prompt injection. Threat analysts will need to keep up with emerging tactics, techniques, and procedures (TTPs) used in AI attacks; MITRE ATLAS is a good resource here.

As AI applications extend to third parties, CISOs will need tailored audits of third-party data, AI security controls, supply chain security, and so on. Security leaders must also pay attention to emerging and often-changing AI regulations. The EU AI Act is the most comprehensive to date, emphasizing safety, transparency, non-discrimination, and environmental friendliness. Others, such as the Colorado Artificial Intelligence Act (CAIA), may change rapidly as consumer reaction, enterprise experience, and legal case law evolve. CISOs should anticipate other state, federal, regional, and industry regulations.
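To make the "basics" above concrete, here is a minimal sketch of what an AI risk register entry might look like as a data structure. The schema, field names, and likelihood-times-impact scoring are illustrative assumptions, not from any standard; real programs often use weighted or qualitative scales.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical AI risk register entry; all field names are illustrative.
@dataclass
class AIRiskEntry:
    asset: str                  # AI system or model from the asset inventory
    threat: str                 # e.g. "prompt injection" (see MITRE ATLAS for TTPs)
    likelihood: int             # 1 (rare) .. 5 (almost certain)
    impact: int                 # 1 (negligible) .. 5 (severe)
    owner: str                  # accountable team or individual
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    @property
    def risk_score(self) -> int:
        # Simple likelihood x impact scoring for triage ordering.
        return self.likelihood * self.impact

register = [
    AIRiskEntry("customer-support-llm", "prompt injection", 4, 3, "appsec"),
    AIRiskEntry("fraud-scoring-model", "model poisoning", 2, 5, "ml-platform"),
]

# Review the highest-scoring risks first.
for entry in sorted(register, key=lambda e: e.risk_score, reverse=True):
    print(entry.asset, entry.threat, entry.risk_score)
```

Even a sketch this small makes the continuous-review point visible: `last_reviewed` and `owner` turn the register from a one-time spreadsheet into something that can be audited and aged.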

3. Pay attention to an evolving definition of data integrity: You'd think this would be obvious, as confidentiality, integrity, and availability make up the cybersecurity CIA triad. But in the infosec world, data integrity has focused on issues such as unauthorized data modifications and data consistency. Those protections are still needed, but CISOs should expand their purview to include the data integrity and veracity of the AI models themselves.

To illustrate this point, here are some infamous examples of model data issues. Amazon created an AI recruiting tool to help it better sort through resumes and choose the most qualified candidates. Unfortunately, the model was mostly trained with male-oriented data, so it discriminated against women applicants. Similarly, when the UK created a passport photo checking application, its model was trained using people with white skin, so it discriminated against darker-skinned individuals.

AI model veracity isn't something you'll cover as part of a CISSP certification, but CISOs must be on top of this as part of their AI governance responsibilities.
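One basic veracity check behind the recruiting and passport examples above is comparing a model's selection rates across groups (a demographic parity check). The sketch below assumes a simple list of (group, decision) pairs; the data and the idea of flagging a large gap are illustrative, and production fairness testing is considerably more involved.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in decisions:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def parity_gap(rates):
    # Largest difference in selection rate between any two groups; a big gap
    # is a signal to investigate training data balance, not proof of bias.
    vals = list(rates.values())
    return max(vals) - min(vals)

# Illustrative decisions: group A is selected 3 of 4 times, group B 1 of 4.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(rates)              # {'A': 0.75, 'B': 0.25}
print(parity_gap(rates))  # 0.5
```

Checks like this belong alongside traditional integrity controls: they monitor whether the model's outputs remain trustworthy, not just whether the data at rest was tampered with.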

4. Strive for AI literacy at all levels: Every employee, partner, and customer will be working with AI at some level, so AI literacy is a high priority. CISOs should start in their own department with AI fundamentals training for the entire security team.

Established secure software development lifecycles should be amended to cover areas such as AI threat modeling, data handling, and API security. Developers should also receive training on AI development best practices, including the OWASP Top 10 for LLMs, Google's Secure AI Framework (SAIF), and Cloud Security Alliance (CSA) guidance.

End-user training should include acceptable use, data handling, misinformation, and deepfake awareness. Human risk management (HRM) solutions from vendors such as Mimecast may be necessary to keep up with AI threats and customize training to different individuals and roles.

5. Remain cautiously optimistic about AI technology for cybersecurity: I'd categorize today's AI security technology as more "driver assist," like cruise control, than autonomous driving. Nevertheless, things are advancing quickly. CISOs should ask their staff to identify discrete tasks, such as alert triage, threat hunting, risk scoring, and report creation, where they could use some help, and then start to research emerging security innovations in these areas.

Simultaneously, security leaders should schedule roadmap meetings with leading security technology partners. Come to these meetings prepared to discuss specific needs rather than sit through pie-in-the-sky PowerPoint presentations. CISOs should also ask vendors directly how AI will be used for tuning and optimizing existing technology. There's a lot of innovation going on, so I believe it's worth casting a wide net across existing partners, competitors, and startups.

A word of caution, however: many AI "products" are really product features, and AI applications are resource-intensive and expensive to develop and operate. Some startups will be acquired, but many may burn out quickly. Caveat emptor!

Opportunities ahead: I'll end this article with a prediction. About 70% of CISOs report to CIOs today. I believe that as AI proliferates, CISOs' reporting structures will change rapidly, with more reporting directly to the CEO. Those who take a leadership role in AI business and technology governance will likely be the first ones promoted.

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4011384/the-cisos-5-step-guide-to-securing-ai-operations.html

