Although Canadian litigation efforts are likely a fraction of those underway in the United States, a new webinar from AM Best, Defending Artificial Intelligence Claims, provides a vantage point from which industry watchers can begin to gauge the effects stemming from the broader adoption of artificial intelligence (AI) in business and insurance.
The panel discussion, hosted by the ratings agency, covered emerging legal issues, AI’s role in detecting and preventing fraud, unintended discrimination and the specific steps carriers should take to protect themselves.
AI, they point out, is ubiquitous today. In insurance, it is being used in claims, with some companies providing apps that let claimants submit accident photos and receive an estimate, in some cases before the police even arrive. Chatbots are communicating with insureds and claimants, and some clients have telematics monitoring on their cars. In underwriting, too, AI is being used to alert underwriters and assign a higher risk score when applicants cut and paste information or take too long to answer questions they should know right away, such as their date of birth.
“It is actively being used in claims. It is actively being used in insurance and in litigation,” says Maria Abate, shareholder with Colodny Fass. “It’s hard to find the balance as to when to trust the AI.”
Erroneous decisions
There is also ongoing debate as to who is liable when an AI system causes harm or makes erroneous decisions. Among the examples cited, the panellists point out that an algorithm that works well on a homogeneous population can produce very different results when applied to a more diverse one.
In one 2016 example, GEICO settled a California Department of Insurance investigation claiming that the insurer improperly discriminated based on gender, educational attainment and occupation when quoting insurance rates; its algorithm took into account how much money the applicant was making and what zip code they lived in. “You would have a bank teller getting a 19 per cent higher rate than the bank executive because of their education and what they were making, and that doesn’t have anything to do with how well you drive,” Abate points out.
Inherent bias
In another example, she points out that Amazon, more than 10 years ago, built an algorithm to screen resumes in an effort to hire the best and the brightest. Because the algorithm included salary considerations, the “best and the brightest” it selected were all white males. “That doesn’t necessarily mean that those individuals were the best and the brightest, there was an inherent bias in the data because, as we all know, women and minorities typically get paid less for the same jobs,” she says. “That’s already baked into the data and exists. If we’re not cognizant of it, it’s just going to create more issues.”
Finally, a State Farm case in Illinois alleges that black homeowners’ claims were being delayed and flagged at a much higher rate than white homeowners’ claims. “There’s never any intention in that, but they need to look at those algorithms,” Abate says. “This is the type of exposure that insurance companies are going to be looking at.”
Data security and protection, namely privacy, is a concern for many insurers, they add.
Interestingly, they also point out that AI accuracy degrades over time for various reasons, and that this degradation can go unnoticed until something catastrophic happens or a large number of people are affected. “Make no mistake, AI is great in a lot of ways but it could also be a detriment,” says Matt Keris, shareholder with Marshall Dennehey. “We need to keep our eye on it.”