Only one in five workers say their AI use is checked at work. That needs to change

The Star Online - Tech·2025-09-06 11:01

We know plenty of workers are keen to use AI in the office to speed up mundane tasks, and many forward-thinking employers actively encourage AI use to increase productivity. But any breakthrough technology brings risks as well as benefits, and a new survey shows most companies have almost no idea how their workers are using AI. It’s almost impossible to overstate the legal, financial, and reputational hazards that go with unchecked AI use in the workplace.

Data from a survey by New York City-based business advisory firm EisnerAmper shows that only 22% of US desk workers who use AI tools say their company actively monitors how the tech is being used, industry site HRDive reported. That means roughly eight in 10 workers using AI in the office do so with little oversight – even if their employer has safety rules or legal guidelines for these tools, there’s no guarantee that workers are following them, and there’s likely no audit trail should something go wrong.

AI tools, as is well known, are not 100% reliable: AI models can hallucinate, fabricating wholly false answers to user queries and presenting them as fact. EisnerAmper’s survey, which quizzed over 1,000 workers with a bachelor’s degree or higher who’d used AI in the past year, underlines this: Sixty-eight percent of respondents said they’d regularly encountered mistakes when using AI tools. But somehow this didn’t dampen their enthusiasm: Eighty-two percent said they were confident that, on the whole, AI tools were giving accurate outputs.

In a news release accompanying the data, Jen Clark, director of technology enablement at EisnerAmper, noted that the results point to a clear “communication gap” between employers and staff about both good and bad AI practices. 

Taken together with recent reports about how cavalier workers can be with AI in the office, this news should worry any employer, of any size, that has deployed AI tools. In May, a Reddit discussion illuminated how workers were using AI in incredibly risky ways: Titled “Copy. Paste. Breach? The Hidden Risks of AI in the Workplace,” the discussion saw IT workers and managers sharing how employees were uploading sensitive data, leaving many companies “sleepwalking into a compliance minefield” of potential data theft.

In February, a different report revealed that some workers were so keen to use AI to cut corners that they were sneaking their own AI tools into the workplace – either because their company wasn’t furnishing them with any AI system, or because they felt the tools they were allowed to use were inadequate. And in August, another report revealed exactly the kind of risky behaviour workers were engaging in with AI tools: In a survey of US workers, 58% of respondents said they had pasted sensitive company data into AI systems when looking for help. The data ranged from client records to potentially private financial information and documents that may contain sensitive company plans.

The issue here is that AI models are known to “leak” data later on: Uploading a piece of information into a publicly accessible AI chatbot may result in some or all of that information emerging at a later date, when a different user makes a relevant query. Similarly, some tools explicitly state that any information shared with the AI may be used to train future models – meaning that sensitive client or company information could be permanently encoded in the AI. This practice very likely violates customer and client privacy agreements, could derail a crucial transaction like an IPO, and could expose a company to regulatory sanctions and even costly litigation.

What can you do about this?

Firstly, if your company has no AI usage policy, it’s high time to create one. It doesn’t need to be sophisticated: It could mean selecting AI tools that align with your company’s data privacy goals and making clear that your staff may use only those tools, or it could mean allowing your staff to use public-facing tools like ChatGPT while forbidding them to upload any sensitive business data.
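To make that second option concrete, here is a minimal sketch in Python of what enforcing such a rule could look like: a pre-send check that blocks prompts containing sensitive-looking data before they reach a public chatbot. The patterns, function names, and blocked categories are illustrative assumptions for this sketch, not part of any real product or API; a production setup would rely on a dedicated data-loss-prevention tool.

```python
import re

# Illustrative patterns for data an AI usage policy might forbid in prompts.
# These regexes are assumptions made for this sketch, not a complete or
# vendor-backed list; a real deployment would use a proper DLP product.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "client record tag": re.compile(r"\bCLIENT[-_ ]?ID[:#]?\s*\w+", re.IGNORECASE),
}

def policy_violations(text: str) -> list[str]:
    """Return the label of every sensitive pattern found in the prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def submit_prompt(text: str) -> None:
    """Forward a prompt to the approved AI tool only if it passes the check."""
    violations = policy_violations(text)
    if violations:
        # Logging each refusal creates the audit trail the survey found missing.
        print(f"BLOCKED ({', '.join(violations)}): prompt not sent")
        return
    print("OK: prompt forwarded to the approved AI tool")

if __name__ == "__main__":
    submit_prompt("Summarise the attached public press release.")
    submit_prompt("Reply to jane.doe@example.com about CLIENT_ID: 4471")
```

A side benefit of a gate like this is that every blocked attempt can be logged, giving you exactly the audit trail most companies in the survey lack.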

Secondly, you should initiate an ongoing company AI education program. AI tools are evolving all the time, meaning that they’re getting more useful – but each new tool also brings risks. Having an ongoing conversation with your staff about responsible AI use is a must. And it may even be a selling point for your customers, especially if they’re worried about having their data exposed to a third party. – Inc./Tribune News Service
