Coverage in flux: Phil Karecki, CTO for the insurance sector at managed services provider Ensono, also sees some carriers backing away from covering AI outputs, although he's not sure whether it's a major trend. Insurance carriers continuously experiment with how to provide coverage, he notes.

Carriers have tried to separate tightly governed AI deployments from more experimental projects when determining whether to provide coverage, he says.

"You've got this bifurcation of AI, the governed generative and the autonomous pieces," he says. "It's no longer, 'Are you using AI?' It's asking, 'Are you using governed AI? How are you governing it? How are you keeping it safe and secure?'"

Carriers have been trying to determine whether covering AI workloads can be profitable for them, Karecki adds. Governed AI tools operating in a bounded decision-making process will be more insurable, while experimental AI systems with no monitoring and no easy rollback will be difficult to cover, he notes.

"There's a repositioning versus a pullback, and that's very common to the industry, and they will at times open up coverage just to see if it's this type of insurance that will sell," he says. "They will assess the results and what needs to change so they can decide whether to re-enter this marketplace or abandon it completely."

In some cases, whether an AI system is insurable may come down to circumstances at individual insurance customers. Carriers in general don't want to get out of the business of providing insurance, Karecki says.

"What they're working for right now is, 'How do I make this profitable, and is this sector insurable?'" he says. "They make those decisions on every application regardless, but now, depending upon what they're being asked to insure, the questions will follow. 'What are you using AI for? How are you governing it? What risks does that introduce?'"

It makes sense that some carriers have begun to question whether to cover AI outputs, given the current level of unreliability of most AI systems, says Dorian Smiley, CTO at Codestrap.

"The math says these models should be deterministic, like given the same input, you should get the same output," he says. "But you can get very different output from the same input, and they can't know if the answer that they're giving you is actually correct."

In most cases, AI models lack inductive reasoning and can't review their own work, but many organizations are talking about deploying hundreds of autonomous agents and treating them like digital employees, he notes.

"The idea that these agents are going to become employees, autonomous people working in your organization, is insane," he says. "You would never hire a person that can't learn new information, can't reliably retrieve information, or check their own work."

NSI's Bishara has advice for IT and business leaders looking for insurance coverage for their AI workloads: Be honest about how they're using AI. If they try to hide their AI risks, they risk having their claims rejected when something goes wrong, he says.

"If you don't fully disclose these things appropriately in the way in which you're functioning and operating, it could be utilized as an excuse to deny a claim at a later date," he says. "You don't want a carrier to come back and say, 'We didn't underwrite to that risk. We asked these questions, and you didn't disclose it.'"
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4159292/insurance-carriers-quietly-back-away-from-covering-ai-outputs.html

