
Roughly half of employees are using unsanctioned AI tools, and enterprise leaders are major culprits

Among the findings:

- 51% have connected AI tools to work systems or apps without the approval or knowledge of IT;
- 63% believe it’s acceptable to use AI when there is no corporate-approved option or IT oversight;
- 60% say speed is worth the security risk;
- 21% think employers will simply “turn a blind eye” as long as they’re getting their work done.

And the C-suite’s own use of shadow tools? That’s a little more difficult to gauge; they’re close-lipped about it, which itself indicates a wider problem, Williams noted. “Senior executives often don’t want to admit they are using AI,” he said. Instead, they’re trying to prove how valuable they are without disclosing their own AI use.

Just like workers elsewhere in the enterprise, “senior leaders are able to get more done faster than ever” with AI, he noted. For instance, he said, “you can draft a legal contract in seconds and get a lawyer to review, rather than spend weeks drafting and redrafting using external counsel.”

Concerningly, when it comes to the tools workers are using, free versions tend to be the most popular. More than half (58%) of employees using non-approved tools rely on free versions, and 34% of those working at companies that do allow AI tools are also opting for the free version.

“Non-paid is almost certainly worse because of the licensing and business models around them,” said Williams. “There is always a cost to using free tools; in this case it’s the value of your data.”

And employees are not shy about loading sensitive data into unsanctioned AI tools: 33% admit to sharing enterprise research or datasets; 27% to revealing employee data (such as salary or performance tracking); and 23% to inputting company financial information.

This becomes dangerous because virtually all free tools use ingested data to train their models, and some of the lower-tiered paid tools do, too, Williams pointed out. “And,” he said, “you cannot get this information back.” Paid enterprise plans typically allow companies to turn off training on their data, but not always; admins must check this with their large language model (LLM) providers.

“The big problem is the loss of intellectual property,” said Williams. Threat actors can access this information to profile and target an organization, breach its networks, and exfiltrate confidential data for extortion. “The more data that is disclosed to LLMs, the more information is available [to threat actors] to build a better profile,” Williams noted.
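The article does not prescribe a specific control, but the exposure described above (research data, salary details, and financials pasted into free AI tools) is the kind of thing a lightweight pre-submission check can catch. The sketch below is a hypothetical illustration, not a tool mentioned by Williams; the regex patterns and the flag_sensitive helper are assumptions and would need tuning before any real use.

```python
import re

# Hypothetical illustration only: flag prompts that appear to contain the kinds
# of data the article says employees paste into unsanctioned AI tools (research,
# salary/performance details, company financials). Patterns are assumptions.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "possible_salary": re.compile(r"\bsalary\b.{0,40}?\$?\d{2,3}[,.]?\d{3}\b", re.IGNORECASE),
    "financial_keyword": re.compile(r"\b(revenue|EBITDA|forecast|P&L)\b", re.IGNORECASE),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of all patterns that match `text`."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarise our Q3 revenue forecast; Alice's salary is $120,000 (alice@example.com)."
    hits = flag_sensitive(prompt)
    if hits:
        print("Flagged before leaving the network:", ", ".join(hits))
    else:
        print("Prompt passed the basic check.")
```

A check like this only reduces accidental leakage; it does nothing about data already ingested by a free tool, which, as Williams notes, cannot be gotten back.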

Enterprises must build policies around AI use

Many CEOs have been mandating AI adoption and allocating capital throughout the business for this purpose, Williams noted. Executives are looking to cost savings as a strategic advantage and a way to quickly return shareholder value.

Unfortunately, security is an afterthought, he said. “Many companies have just chosen to ignore the problem, and have decided not to create a policy or see the value in paying for the technology, which is a very big mistake.”

Organizations are “flying blind,” and 99% have no way of even knowing what is happening in their environments because there are no products in place to measure it, he observed. This should raise serious red flags for security teams; there must be greater oversight and visibility into these security blind spots.

Williams advised enterprises to audit what is going on inside their systems, measure the scope of the problem, define policies around AI use, and adopt governance frameworks to control it (one rough starting point is sketched at the end of this article).

Further, employees must be made aware of the risks. Many, CISOs included, don’t actually understand the extent of the problem and its broader implications. “Education is essential and doesn’t require a lot of work,” said Williams. Implementing a policy and framework, on the other hand, does, and enterprises first need to decide what risks they are willing to live with.

Ultimately, he said, we are navigating an unprecedented time in history, with new technology advancing at such a rapid pace that the technologists themselves don’t even know where it is going. Enterprises must quickly understand the implications and use AI responsibly to gain a strategic advantage.

“Just as the industrial revolution and the internet changed the way we worked, AI is doing the same,” said Williams. “In fact, we expect this to be an even bigger shift than either of those transitions.”

This article originally appeared on CIO.com.
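As one rough starting point for the audit Williams recommends, the minimal sketch below counts how often outbound proxy-log entries hit well-known AI tool domains, giving a coarse measure of shadow-AI usage. The CSV log schema ("user" and "host" columns), the domain list, and the file name proxy_log.csv are all assumptions for illustration, not anything specified in the article.

```python
import csv
from collections import Counter

# Hypothetical audit sketch: estimate shadow-AI usage by counting proxy-log
# requests to well-known AI tool domains. Log schema and domain list are assumed.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def shadow_ai_usage(log_path: str) -> Counter:
    """Count requests per (user, AI domain) in a CSV proxy log with 'user' and 'host' columns."""
    usage: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get("host") or "").strip().lower()
            if host in AI_DOMAINS:
                usage[(row.get("user") or "unknown", host)] += 1
    return usage

if __name__ == "__main__":
    # Assumed example log file; in practice this would come from the proxy or SWG.
    for (user, host), count in shadow_ai_usage("proxy_log.csv").most_common(10):
        print(f"{user:<20} {host:<25} {count} requests")
```

A count like this only measures the scope of the problem; the policy and governance framework Williams describes still have to decide what happens once that usage is visible.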

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4124775/roughly-half-of-employees-are-using-unsanctioned-ai-tools-and-enterprise-leaders-are-major-culprits-2.html
