The insurance industry is under increasing pressure to safeguard personally identifiable information (PII) against sophisticated data privacy risks, say authors of a new blueprint from the Info-Tech Research Group, Safeguard Your Data When Deploying AI in Your Insurance Systems.

In the blueprint, the global research and advisory company provides a framework, including downloadable risk maps, capability maps and security policy templates, for addressing the growing challenges of data privacy in the face of artificial intelligence (AI) developments. It recommends AI training for all employees, strong data governance practices and proactive risk management to safeguard PII when using AI for underwriting, claims processing and customer engagement.

Outdated legacy systems 

“Traditional system safeguards and outdated legacy systems are proving insufficient to address the complexities of modern AI-driven processes, leaving insurers exposed to regulatory and technological vulnerabilities,” they write in a statement about the blueprint’s publication. They add that legacy systems often lack the flexibility to meet modern demands, while employees’ unfamiliarity with AI can cause confusion when assessing risks and determining appropriate applications. “Regulatory requirements, which may not align with AI-driven processes, further heighten compliance challenges,” they write.

Three key risks 

The three key risks tied to generative AI developments are data breaches of PII; noncompliance with data protection and privacy regulations; and insider threats, where employees or third-party contractors with access to AI systems cause damage, either intentionally or through negligence.

“Insurers are unaware of PII risk points and how to comply with them. AI (also) presents new challenges for your organization. It does not have formal acceptable use policies, your employees do not fully understand how it works, and your data security did not consider applications like it,” the blueprint states. “Regulatory requirements can be complex and may not align seamlessly with AI processes, leading to compliance risks.” 
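The blueprint itself contains no code, but the gap it describes, data security controls that never anticipated AI applications, can be illustrated with a minimal sketch: redacting recognizable PII patterns before a prompt is sent to an external generative AI service. The patterns, placeholder labels and claim text below are hypothetical examples, not part of the blueprint, and a production control would need far more than regular expressions (names, for instance, pass through untouched here).

```python
import re

# Hypothetical illustration: scrub common PII patterns before text
# leaves the organization's boundary for an external AI service.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "SIN": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),  # Canadian SIN-style number
}

def redact_pii(text: str) -> str:
    """Replace recognizable PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Claimant Jane Doe, jane.doe@example.com, 416-555-0199, filed claim #4471."
    print(redact_pii(prompt))
    # -> Claimant Jane Doe, [EMAIL], [PHONE], filed claim #4471.
    # Note: the name survives redaction; regex alone is not a complete control.
```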

Data integrity is also discussed in the document, which states that “data integrity risk comes from repeatedly using unverified Gen AI outputs. A single output with faulty data may not cause much trouble, but if these low-quality outputs are added to databases, they may compromise the integrity of your records over time.” The authors also point out that a company that lacks control over an AI’s outputs risks disclosure to unauthorized parties.
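One way to picture that integrity risk is as a missing validation gate between the model and the system of record. The sketch below is an assumed illustration, not the blueprint’s method: an AI-generated claim summary is written to the database only if its figures match verified source data, so unverified outputs cannot accumulate over time. The ClaimSummary record, field names and tolerance are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ClaimSummary:
    claim_id: str
    total_paid: float

def validate_summary(summary: ClaimSummary, source_total: float) -> bool:
    """Accept the AI-generated summary only if its figures match the
    system of record; otherwise it goes to human review."""
    if not summary.claim_id:
        return False
    # Reject outputs whose numbers drift from verified source data.
    return abs(summary.total_paid - source_total) < 0.01

def persist_if_valid(summary: ClaimSummary, source_total: float, db: list) -> None:
    if validate_summary(summary, source_total):
        db.append(summary)  # verified output enters the record
    else:
        print(f"Flagged for review: {summary.claim_id}")  # unverified output is held back

if __name__ == "__main__":
    records: list = []
    persist_if_valid(ClaimSummary("C-1001", 2500.00), source_total=2500.00, db=records)
    persist_if_valid(ClaimSummary("C-1002", 9999.00), source_total=2400.00, db=records)
    print(len(records))  # 1 -- only the verified summary was stored
```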

Slow reaction times 

According to Info-Tech, PII is the primary target of most breaches. In 2023, the reported causes of sensitive information loss in global business were careless users (70.6 per cent of cases), compromised systems (48.1 per cent), misconfigured systems (45.3 per cent) and malicious employees or contractors (20 per cent); a single breach can have more than one cause, which is why the figures total more than 100 per cent. Slow reaction times, they add, also damage revenue and company reputation when breaches occur: they say it takes organizations an average of 204 days to identify a data breach and 73 days to contain it.

To start addressing the challenge, they recommend using the downloadable documents to determine which insurance-specific risks apply to the organization, taking a continuous-improvement approach, and training employees both to understand the risks that AI can introduce and to influence overall company culture.

“Assess risk and define acceptable use. These are the first key steps to AI security improvement,” they state. “Reevaluate your data security controls and plan necessary improvements.”