What impact will AI have on risk pooling?
Nick Reilly, Head of Business Development, UK and North Europe at RNA Analytics, considers what impact AI will have on risk pooling …
There has been considerable discussion about the impact of AI, much of it positive. However, some continue to urge caution. One area at risk is the concept of risk pooling and, you could argue, with it the future of insurance.
Surely not; that is just nonsense. Or is it?
Risk pooling is the bedrock of insurance (with insurers traditionally running the risk pool). The many customers who pay premiums cover the few who need to claim. Each individual is (slightly) poorer, but those who do claim are protected from substantial loss. Example definitions are:
“The premiums of the many pay for the losses of the few” (GAD)
“Aggregating independent risks to make the aggregate more certain” (LSE)
This therefore relies upon a pool of (semi-)homogeneous lives. In other words, each pool is made up of similar risks.
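The LSE definition above, that aggregating independent risks makes the aggregate more certain, can be illustrated with a short simulation. This is a toy sketch with invented claim probabilities and amounts, not a pricing model: as the pool grows, the average cost per member becomes far more predictable.

```python
import random
import statistics

def simulate_pool(n_members, claim_prob=0.01, claim_size=100_000,
                  n_trials=1_000, seed=42):
    """Simulate n_trials years for a pool and return the cost per member
    in each year. Claim probability and size are illustrative only."""
    rng = random.Random(seed)
    per_member_costs = []
    for _ in range(n_trials):
        claims = sum(1 for _ in range(n_members) if rng.random() < claim_prob)
        per_member_costs.append(claims * claim_size / n_members)
    return per_member_costs

for size in (100, 10_000):
    costs = simulate_pool(size)
    mean = statistics.mean(costs)
    cv = statistics.stdev(costs) / mean  # relative uncertainty of the pool's cost
    print(f"pool size {size:>6}: mean cost/member {mean:9.2f}, "
          f"relative uncertainty {cv:.2f}")
```

The expected cost per member is the same for both pools, but the relative uncertainty falls sharply as the pool grows; shrink the pool, and the aggregate becomes less certain again.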
Looking at term life insurance in the UK:
Insurers have spent a generation working to reduce the headline price of life insurance by adding ever more risk factors (and, with them, the time, cost and complexity of underwriting). Removing the ‘worst’ risks from the pool gives the first insurer able to do so a competitive edge. The pool shrinks, as fewer customers are now treated as homogeneous, and more lives are rated (pay a higher premium) or declined (refused cover).
Pre-approval, using predictive techniques, was a further step: an attempt to identify the ‘best’ lives in the pool without needing to ask the underwriting questions. This approach used data analysis to identify features that would predict the ‘best’ lives. It has many advantages, as it greatly reduces underwriting costs and offers considerably more convenience for customers. This is countered by insurers’ understanding that they will occasionally get it wrong.
Using machine learning (basic AI) to refine this approach, drawing on available data, extends the benefits and reduces the errors. However, there is a risk that the approach is used not to identify the ‘best’ lives and offer them a better experience, but to identify the ‘worst’ lives.
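The shift described above, from fast-tracking the ‘best’ lives to screening out the ‘worst’, amounts to scoring applicants against a cut-off. A minimal sketch of that mechanism follows; the features, weights and threshold are all invented for illustration, whereas a real model would be fitted to claims data.

```python
def risk_score(applicant):
    """Higher score = higher predicted likelihood of claiming.
    Weights are invented for illustration, not real underwriting factors."""
    weights = {"smoker": 2.0, "high_bmi": 1.0, "hazardous_job": 1.5}
    return sum(w for feature, w in weights.items()
               if applicant["features"].get(feature))

applicants = [
    {"name": "A", "features": {"smoker": False, "high_bmi": False, "hazardous_job": False}},
    {"name": "B", "features": {"smoker": True,  "high_bmi": False, "hazardous_job": False}},
    {"name": "C", "features": {"smoker": True,  "high_bmi": True,  "hazardous_job": True}},
]

THRESHOLD = 2.5  # arbitrary cut-off: above it, a life is rated or declined
flagged = [a["name"] for a in applicants if risk_score(a) > THRESHOLD]
print(flagged)  # the lives removed from the standard pool
```

The point is that the scoring mechanism is identical in both uses; what changes is which tail of the distribution it is pointed at, and that is where the moral and regulatory questions arise.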
In this instance, prospective customers could have their application rejected, or their premiums rated, due to data obtained from other sources, if this indicated a higher risk of claiming.
This approach is, of course, subject to legal and regulatory risk around the use of personal data (and possible impacts under anti-discrimination laws). It also raises the moral question of whether machine learning and advanced predictive techniques should be used proactively to identify the ‘worst’ lives and remove them from pools.
Many in InsurTech have held the misguided belief that the ideal is to use so much data that we create ‘an insurance of one’. This would mean, in an ideal world, that your exact risk could be known and priced. Clearly, this is the antithesis of risk pooling. Actuarial models and software can be built to model, price and reserve at this level of granularity, but the question to ask is ‘should we?’