Coming AI regulations have IT leaders worried about hefty compliance fines

CIOs on the forefront: With US states and more countries potentially passing AI regulations, CIOs are understandably nervous about compliance as they deploy the technology, says Dion Hinchcliffe, vice president and practice lead for digital leadership and CIOs at market intelligence firm Futurum Equities. "The CIO is on the hook to make it actually work, so they're the ones really paying very close attention to what is possible," he says. "They're asking, 'How accurate are these things? How much can data be trusted?'"

While some AI regulatory and governance compliance solutions exist, some CIOs fear those tools won't keep up with the ever-changing regulatory and AI functionality landscape, Hinchcliffe says. "It's not clear that we have tools that will constantly and reliably manage the governance and the regulatory compliance issues, and it'll maybe get worse, because regulations haven't even arrived yet," he says.

AI regulatory compliance will be especially difficult because of the nature of the technology, he adds. "AI is so slippery," Hinchcliffe says. "The technology is not deterministic; it's probabilistic. AI works to solve all these problems that traditionally coded systems can't because the coders never thought about that scenario."

Tina Joros, chairwoman of the Electronic Health Record Association AI Task Force, also sees concerns over compliance because of a fragmented regulatory landscape. The various regulations being passed could widen an already large digital divide between large health systems and their smaller and rural counterparts, which are struggling to keep pace with AI adoption, she says.

"The various laws being enacted by states like California, Colorado, and Texas are creating a regulatory maze that's challenging for health IT leaders and could have a chilling effect on the future development and use of generative AI," she adds. Even bills that don't make it into law require careful analysis, because they could shape future regulatory expectations, Joros says.

"Confusion also arises because the relevant definitions included in those laws and regulations, such as 'developer,' 'deployer,' and 'high risk,' are frequently different, resulting in a level of industry uncertainty," she says. "This understandably leads many software developers to sometimes pause or second-guess projects, as developers and healthcare providers want to ensure the tools they're building now are compliant in the future."

James Thomas, chief AI officer at contract software provider ContractPodAi, agrees that the inconsistency and overlap between AI regulations create problems. "For global enterprises, that fragmentation alone creates operational headaches, not because they're unwilling to comply, but because each regulation defines concepts like transparency, usage, explainability, and accountability in slightly different ways," he says. "What works in North America doesn't always work across the EU."

Look to governance tools: Thomas recommends that organizations adopt a suite of governance controls and systems as they deploy AI. In many cases, a major problem is that AI adoption has been driven by individual employees using personal productivity tools, creating a fragmented deployment approach. "While powerful for specific tasks, these tools were never designed for the complexities of regulated, enterprise-wide deployment," he says. "They lack centralized governance, operate in silos, and make it nearly impossible to ensure consistency, track data provenance, or manage risk at scale."

As IT leaders struggle with regulatory compliance, Gartner also recommends that they focus on training AI models to self-correct, create rigorous use-case review procedures, increase model testing and sandboxing, and deploy content moderation techniques such as report-abuse buttons and AI warning labels.

IT leaders need to be able to defend their AI results, which requires a deep understanding of how the models work, says Gartner's Clougherty Jones. In certain risk scenarios, this may mean using an external auditor to test the AI. "You have to defend the data, you have to defend the model development, the model behavior, and then you have to defend the output," she says. "A lot of times we use internal systems to audit output, but if something's really high risk, why not get a neutral party to be able to audit it? If you're defending the model and you're the one who did the testing yourself, that's defensible only so far."
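To make the provenance and auditability points concrete, here is a minimal sketch in Python of a centralized audit trail for AI calls. All names (AuditRecord, governed_completion, the model identifier) are illustrative assumptions; neither the article nor the vendors quoted prescribe any particular implementation. The idea is simply that every model invocation is logged in one place, tied to an approved use case, and accompanied by an AI warning label, so that an internal team or a neutral third party can later audit the output.

# Hypothetical sketch of a centralized AI audit trail: records inputs, outputs,
# model version, and the warning label shown to users. Illustrative only.
import hashlib
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    record_id: str        # unique ID so an internal or external auditor can cite a call
    timestamp: float      # when the model was invoked
    model_version: str    # which model produced the output ("defend the model")
    use_case: str         # ties the call back to a reviewed, approved use case
    prompt_sha256: str    # hash of the input: provenance without storing raw data
    output_sha256: str    # hash of the output, so the logged result can't be swapped
    ai_label: str         # the AI warning label that accompanied the output

def _sha256(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

class AuditLog:
    """Append-only JSONL log that a neutral party could replay and verify."""
    def __init__(self, path: str):
        self.path = path

    def append(self, record: AuditRecord) -> None:
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record)) + "\n")

def governed_completion(prompt: str, use_case: str, log: AuditLog) -> str:
    # Stand-in for a real model call; any provider SDK would slot in here.
    output = f"[model output for: {prompt[:40]}]"
    label = "AI-generated content; verify before relying on it."
    log.append(AuditRecord(
        record_id=str(uuid.uuid4()),
        timestamp=time.time(),
        model_version="example-model-v1",  # assumed identifier
        use_case=use_case,
        prompt_sha256=_sha256(prompt),
        output_sha256=_sha256(output),
        ai_label=label,
    ))
    return f"{output}\n\n{label}"

if __name__ == "__main__":
    log = AuditLog("ai_audit.jsonl")
    print(governed_completion("Summarize the patient intake notes.", "clinical-summarization", log))

Hashing rather than storing raw prompts and outputs is one way to keep provenance evidence while limiting retention of sensitive data, a trade-off that matters most in regulated settings like the health systems Joros describes.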

First seen on cio.com

Jump to article: www.cio.com/article/4072396
