Insurance and reinsurance marketplace Lloyd’s has published a new report on how the rapid evolution of generative artificial intelligence (GenAI) is impacting the cyber risk landscape. For the time being, the market says, GenAI’s impact has been minimal, thanks largely to built-in safety mechanisms, but it warns about the proliferation of zero-safety large language models (LLMs).
The new report, Generative AI: Transforming the cyber landscape, provides a brief but thorough backstory on GenAI’s development. It summarizes the LLM landscape and how it is transforming cyber risk, outlines considerations for businesses and insurers, and discusses how Lloyd’s plans to develop future solutions.
Substantial underwriting risks
“We see significant opportunities for AI to make life easier for our customers and those using our market, but also substantial risks in the underwriting of AI, where the field continues to change every day. Lloyd’s is committed to working with insurers, startups, governments and others to develop innovative products and intelligent policy guardrails that can support the development of this important technology,” the report states.
In addition to the safety mechanisms in place on most LLMs available to the public, the report says the financial costs and computational barriers to training and running large language models have been significant: “The process of training, fine-tuning, or performing inference with large generative models is computationally intensive, requiring specialized computing hardware components,” the authors write, adding that a recent model released by Meta required over 3.3-million hours of computation, with total training costs estimated at around $10-million (figures in U.S. dollars) for electricity and hardware. “This figure does not consider the acquisition and construction of the data centre itself, or the costs of staff,” they note.
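As a rough sanity check on those figures (the per-hour rate below is derived from the report’s two numbers, not stated in it), the arithmetic works out to a plausible blended hardware-plus-electricity rate:

```python
# Rough sanity check of the report's training-cost figures.
# The implied hourly rate is derived, not quoted in the report.
gpu_hours = 3.3e6        # "over 3.3-million hours of computation"
total_cost_usd = 10e6    # "estimated at around $10-million" for electricity and hardware

implied_rate = total_cost_usd / gpu_hours
print(f"Implied blended cost: ${implied_rate:.2f} per hour of computation")
# Prints roughly $3.03/hour, a plausible blended rate for data-centre GPUs
```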
This, however, is changing. “Since February, several advanced technologies have been developed which have dramatically driven down the computational requirements for training, fine-tuning and running inference for these models. Hundreds of LLMs now exist in the wild for a variety of tasks, many of which can be run locally on commodity hardware.”
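To illustrate how low that barrier has become, here is a minimal sketch of local inference using the Hugging Face transformers library; the model named is one illustrative example of the many small open-weight LLMs that run on a consumer CPU or GPU, not one cited in the report.

```python
# Minimal local-inference sketch using the Hugging Face transformers library.
# The model is an illustrative choice; many small open-weight LLMs of this
# size run on commodity CPUs or a single consumer GPU.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # ~1.1B parameters
)

result = generator("Summarize what a phishing email is.", max_new_tokens=60)
print(result[0]["generated_text"])
```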
No meaningful safeguards
They add: “We are entering a period where no meaningful safeguards or harm-reduction curation through centralized ownership and management of LLMs will be applicable to threat actors – an era of proliferation.”
Frameworks in the report include a cyber threat driver analysis and discussions of vulnerability discovery, campaign planning and execution, risk-reward analysis, and points of failure. “Overall, it is likely that the frequency, severity and diversity of smaller scale cyber losses will grow over the next 12 to 24 months, followed by a plateauing as security and defensive technologies catch up,” they write.
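That frequency-and-severity framing maps onto the standard actuarial aggregate-loss model. The sketch below, with entirely hypothetical parameters (the report publishes no such numbers), shows how a modest rise in loss frequency flows straight through to expected aggregate losses:

```python
# Toy frequency-severity model: aggregate loss = sum of N claims,
# N ~ Poisson(frequency), each claim ~ Lognormal(mu, sigma).
# All parameters are hypothetical; the report gives no figures here.
import numpy as np

rng = np.random.default_rng(42)

def mean_aggregate_loss(frequency, mu=11.0, sigma=1.5, n_sims=20_000):
    counts = rng.poisson(frequency, n_sims)
    totals = np.array([rng.lognormal(mu, sigma, n).sum() for n in counts])
    return totals.mean()

baseline = mean_aggregate_loss(frequency=2.0)  # hypothetical pre-GenAI claim frequency
elevated = mean_aggregate_loss(frequency=3.0)  # hypothetical elevated frequency
print(f"Expected aggregate loss up {elevated / baseline - 1:.0%}")
# Since E[S] = E[N] * E[X], a 50% rise in frequency implies ~50% higher expected loss.
```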
Cyber catastrophes
“It is highly probable that the frequency of manageable cyber catastrophes will moderately increase,” they add later in the report, in a discussion of cyber catastrophes – those that occur when threat actors lose control of covert activities, or when a catastrophe results from state-backed, hostile cyber activity. “The risk is very unlikely to sharply escalate without massive improvements in AI effectiveness, which current industry oversight and governance make improbable; this is an area where an increased focus from regulators may be helpful.”