Underwriters weighing the risks companies are exposed to, and most people managers, will want to note: one in five Canadians is using artificial intelligence (AI) tools to help with their work or studies.

“The survey underscores the need for strong organizational controls and policies and employee education as some users are entering sensitive data into their prompts, not verifying results and claiming AI content as their own,” say researchers at KPMG LLP, who surveyed 5,140 Canadians, many of whom report that using AI has boosted their productivity, generated revenue and improved their grades. “In the process, they are engaging in behaviour that could create risks for their employers,” they state.

“Organizational guardrails are essential to ensure compliance with privacy laws, client agreements and professional standards,” says Zoe Willis, national leader for data, digital and analytics and a partner in KPMG Canada’s generative AI practice.

Among the survey’s findings, respondents report entering information about their employer into prompts, including financial, human resources and supply chain data. Just under half, 49 per cent, said they check the results every time to verify the accuracy of the generated content, even though 73 per cent said they are deeply concerned about the hallucinations that generative AI technologies currently churn out.

The survey further found that nearly two thirds of AI users claim AI-generated content as their own original work all or some of the time, and 70 per cent say they will continue to use generative AI tools regardless of the risks and controversy associated with them.

“Organizations might need to look at creating proprietary generative AI models with safeguarded access to their own data,” Willis adds. “That’s a critical step to reducing the risk of sensitive data leaking out into the world.”