How AI is changing the GRC strategy

Adapting existing frameworks with AI risk controls: AI risks include data safety, misuse of AI tools, privacy considerations, shadow AI, bias and ethical considerations, hallucinations and validating results, legal and reputational issues, and model governance, to name a few. AI-related risks should be established as a distinct category within the organization's risk portfolio by integrating them into the GRC pillars, says Dan Karpati, VP of AI technologies at Check Point. Karpati suggests four pillars:
- Enterprise risk management defines AI risk appetite and establishes an AI governance committee.
- Model risk management monitors model drift, bias, and adversarial testing.
- Operational risk management includes contingency plans for AI failures and human oversight training.
- IT risk management includes regular audits, compliance checks for AI systems, governance frameworks, and alignment with business objectives.

To help map these risks, CISOs can look at the NIST AI Risk Management Framework and other frameworks, such as COSO and COBIT, and apply their core principles of governance, control, and risk alignment to cover AI characteristics such as probabilistic output, data dependency, opacity in decision making, autonomy, and rapid evolution. An emerging benchmark, ISO/IEC 42001, provides a structured framework for AI oversight and assurance that is intended to embed governance and risk practices across the AI lifecycle.

Adapting these frameworks offers a way to elevate the AI risk discussion, align AI risk appetite with the organization's overarching risk tolerance, and embed robust AI governance across all business units. "Instead of reinventing the wheel, security leaders can map AI risks to tangible business impacts," says Karpati.

AI risks can also be mapped to the potential for financial losses from fraud or flawed decision-making, reputational damage from data breaches, biased outcomes or customer dissatisfaction, operational disruption from poor integration with legacy systems and system failures, and legal and regulatory penalties. CISOs can use frameworks like FAIR (Factor Analysis of Information Risk) to assess the likelihood of an AI-related event, estimate loss in monetary terms, and derive risk exposure metrics (a minimal sketch of this kind of calculation appears at the end of this section). "By analyzing risks from both qualitative and quantitative perspectives, business leaders can better understand and weigh security risks against financial benchmarks," says Karpati.

In addition, with emerging regulatory requirements, CISOs will need to monitor draft regulations, track requests for comment periods, have early warning of new standards, and then prepare for implementation before ratification, says Marcus.

Tapping into industry networks and peers can help CISOs stay abreast of threats and risks as they emerge, while reporting functions in GRC platforms monitor regulatory changes. "It's helpful to know what risks are manifesting in the field, what would have protected other organizations, and collectively building key controls and procedures that will make us as an industry more resilient to these types of threats over time," Marcus says.

Governance is a critical part of the broader GRC framework, and CISOs have an important role in setting the organizational rules and principles for how AI is used responsibly.
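To illustrate the FAIR-style quantification Karpati describes, here is a minimal Python sketch that turns an AI risk scenario into an annualized loss exposure figure. The scenario names, event frequencies, and loss ranges are illustrative assumptions, not figures from the article, and a real FAIR analysis decomposes frequency and magnitude into a much richer taxonomy with calibrated estimates.

```python
import numpy as np

def annualized_loss_exposure(events_per_year, loss_low, loss_high,
                             trials=10_000, seed=0):
    """Monte Carlo estimate of annualized loss exposure, FAIR-style:
    risk = loss event frequency x loss magnitude."""
    rng = np.random.default_rng(seed)
    # Loss event frequency: number of AI-related loss events in a simulated year.
    event_counts = rng.poisson(events_per_year, size=trials)
    totals = np.zeros(trials)
    for i, n in enumerate(event_counts):
        if n:
            # Loss magnitude: per-event loss drawn from a plausible range.
            totals[i] = rng.uniform(loss_low, loss_high, size=n).sum()
    return {
        "expected_annual_loss": round(float(totals.mean()), 2),
        "p90_annual_loss": round(float(np.percentile(totals, 90)), 2),
    }

# Hypothetical AI risk scenarios; all figures are illustrative assumptions only.
scenarios = {
    "LLM prompt leaks customer PII": dict(
        events_per_year=0.5, loss_low=50_000, loss_high=400_000),
    "Flawed AI-assisted decisioning": dict(
        events_per_year=2.0, loss_low=10_000, loss_high=150_000),
}

for name, params in scenarios.items():
    print(name, "->", annualized_loss_exposure(**params))
```

Figures like the expected annual loss and the 90th-percentile loss are the kind of financial benchmarks Karpati suggests weighing security risks against.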

Developing governance policies: In addition to defining risks and managing compliance, CISOs are having to develop new governance policies. "Effective governance needs to include acceptable use policies for AI," says Marcus. "One of the early outputs of an assessment process should define the rules of the road for your organization."

Marcus suggests a stoplight system (red, yellow, green) that classifies AI tools for use, or not, within the business. It provides clear guidance to employees and gives technically curious employees a safe space to explore, while enabling security teams to build detection and enforcement programs. Importantly, it also lets security teams offer a collaborative approach to innovation. 'Green' tools have been reviewed and approved, 'yellow' tools require additional assessment and specific use cases, and tools labelled 'red' lack the necessary protections and are prohibited from employee use.

At AuditBoard, Marcus and the team have developed a standard for AI tool selection that includes protecting proprietary data and retaining ownership of all inputs and outputs, among other things. "As a business, you can start to develop the standards you care about and use these as a yardstick to measure any new tools or use cases that get presented to you."

He recommends CISOs and their teams define the guiding principles up front, educate the company about what's important, and help teams self-enforce by filtering out things that don't meet that standard. "Then by the time [an AI tool] gets to the CISO, people have an understanding of what the expectations are," Marcus says.

When it comes to specific AI tools and use cases, Marcus and the team have developed 'model cards': one-page documents that outline the AI system architecture, including inputs, outputs, data flows, intended use case, third parties, and how the data for the system is trained. "It allows our risk analysts to evaluate whether that use case violates any privacy laws or requirements, any security best practices and any of the emerging regulatory frameworks that might apply to the business," he tells CSO. (A rough sketch of such a record, combined with the stoplight classification, follows this section.)

The process is intended to identify potential risks and communicate them to stakeholders within the organization, including the board. "If you've evaluated dozens of these use cases, you can pick out the common risks and common themes, aggregate those and then come up with strategies to mitigate some of those risks," he says.

The team can then look at what compensating controls can be applied, how far they can be applied across different AI tools, and provide this guidance to the executive. "It shifts the conversation from a more tactical conversation about this one use case or this one risk to more of a strategic plan for dealing with the 'AI risks' in your organization," Marcus says.

Jamie Norton warns that now that AI's shiny interface is readily accessible to everyone, security teams need to train their focus on what's happening under the surface of these tools. Applying strategic risk analysis, utilizing risk management frameworks, monitoring compliance, and developing governance policies can help CISOs guide the organization in its AI journey. "As CISOs, we don't want to get in the way of innovation, but we have to put guardrails around it so that we're not charging off into the wilderness and our data is leaking out," says Norton.
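As a rough illustration of how a model card and the stoplight classification could be captured in a reviewable, machine-readable form, the sketch below defines a hypothetical record type in Python. The field names, the Stoplight labels, and the example system are assumptions for illustration, not AuditBoard's actual template.

```python
from dataclasses import dataclass, field
from enum import Enum

class Stoplight(Enum):
    GREEN = "reviewed and approved"
    YELLOW = "requires additional assessment and a specific use case"
    RED = "lacks necessary protections; prohibited"

@dataclass
class ModelCard:
    """One-page summary of an AI system for risk review (hypothetical fields)."""
    name: str
    intended_use: str
    inputs: list[str]
    outputs: list[str]
    data_flows: str
    third_parties: list[str]
    training_data: str
    classification: Stoplight = Stoplight.YELLOW   # default until reviewed
    identified_risks: list[str] = field(default_factory=list)

# Hypothetical example entry for a single AI use case.
card = ModelCard(
    name="Contract-summarization assistant",
    intended_use="Summarize vendor contracts for the legal team",
    inputs=["uploaded contract PDFs"],
    outputs=["plain-language summaries"],
    data_flows="Documents sent to a third-party LLM API over TLS",
    third_parties=["hosted LLM provider"],
    training_data="Vendor foundation model; no fine-tuning on company data",
    identified_risks=["proprietary data leaving the environment"],
)

# A risk analyst reviews the card and records the outcome of the assessment.
card.classification = Stoplight.YELLOW
print(card.name, "->", card.classification.name)
```

Keeping cards in a structured form like this makes it easier to do what Marcus describes: aggregate dozens of reviewed use cases, pull out the common risks and themes, and report them, along with compensating controls, to stakeholders and the board.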

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4016464/how-ai-is-changing-the-grc-strategy.html
