A new article from the Reinsurance Group of America (RGA) points out that artificial intelligence (AI) use in insurance is evolving rapidly – more quickly than the terminology to define it – and a workable definition is necessary to guide further regulation.

“It is important for insurers to not only be mindful of the global regulatory environment but also develop their own AI definition to determine which processes should fall under the additional governance AI requires,” the firm states in its note, Defining Artificial Intelligence in Insurance: The challenge and the opportunity.

Ability to learn 

They write that when developing regulations, a clearer legal definition is required – one that answers whether the technology in question is in fact AI. The ability to learn, they point out, is one of the core pillars for defining AI, but a definition built on that trait alone could be broad enough to include actuarial models that have existed for decades.

The note looks at newer technical definitions, such as that of the National Institute of Standards and Technology (NIST), which defines AI as having the capability to perform functions normally associated with human intelligence, including reasoning, learning and self-improvement.

“Unfortunately, without concrete definitions of how humans experience concepts such as reasoning, free will and consciousness, it is even more difficult to define when a machine reaches one of these thresholds,” they write. 

Ethical considerations 

When regulators work to define AI, meanwhile, their definitions differ fundamentally from technical ones in focus, scope and purpose. Where technical definitions discuss traits and methodologies, regulatory definitions are broader, encompassing the potential risks and ethical considerations associated with deployment. And where technical definitions emphasize capabilities, they say, “in contrast, regulatory definitions focus on the societal impact and governance of AI systems.”

To help, the paper recommends insurers consider materiality (whether the failure of the model would result in significant financial or reputational losses), the legal ramifications, how the machine learns (was the logic developed by an actuary, or was it learned from data?) and the human impact of its decisions. “Will the decisions of the model affect decisions that are important to human customers?” they ask.
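One loose way to picture these criteria (not drawn from the RGA note itself) is as a triage checklist. The minimal Python sketch below is purely illustrative: the ModelProfile fields, the governance_tier rule and its thresholds are all hypothetical assumptions, not RGA's or any regulator's actual framework.

```python
from dataclasses import dataclass

# Hypothetical sketch: field names and the tiering rule are illustrative
# assumptions, not RGA's or any regulator's framework.
@dataclass
class ModelProfile:
    name: str
    learned_logic: bool      # was the logic learned from data, rather than developed by an actuary?
    material: bool           # would failure cause significant financial or reputational losses?
    legal_exposure: bool     # are there legal or regulatory ramifications?
    affects_customers: bool  # do its outputs affect decisions important to human customers?

def governance_tier(m: ModelProfile) -> str:
    """Return an illustrative governance tier for a model.

    In this sketch, learned logic is the gate for AI-specific governance;
    materiality, legal exposure and customer impact raise its intensity.
    """
    if m.learned_logic:
        if m.material or m.legal_exposure or m.affects_customers:
            return "full AI governance"
        return "lightweight AI review"
    return "standard model governance"

# A decades-old actuarial model vs. a learned underwriting model.
mortality_table = ModelProfile("mortality table", learned_logic=False,
                               material=True, legal_exposure=True, affects_customers=True)
ml_triage = ModelProfile("ML underwriting triage", learned_logic=True,
                         material=True, legal_exposure=True, affects_customers=True)

print(governance_tier(mortality_table))  # standard model governance
print(governance_tier(ml_triage))        # full AI governance
```

Under this illustrative rule, the long-standing actuarial model stays within ordinary model governance even though it is material and customer-facing, which reflects the distinction the note draws between learned logic and logic developed by an actuary.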

“Insurers cannot rely on a one-size-fits-all regulatory definition. Even within a single country, varying definitions of AI make it impractical for an insurer to simply adopt one. If the insurer is a global entity, the number of regulatory definitions only grows,” they add. “Fortunately, most insurers have a long track record of governing systems and models that influence business decisions.”