How to Teach a Machine: Artificial Intelligence and Insurance

Can you teach a machine to learn? The future of insurance – and many other industries – may rest upon the answer.

Recently, data scientists at Google’s DeepMind set out to find an answer through the game of chess. The effort revealed as much about AI’s limitations as its possibilities.

Google’s AlphaZero artificial intelligence (AI) system defeated the most advanced chess program in existence. The most interesting part: AlphaZero wasn’t pre-programmed with data on previous games – just basic knowledge of the rules – yet the AI discerned the patterns of play with astonishing speed. True mastery of chess is something that few humans achieve in a lifetime; AlphaZero achieved it in four hours.

This is the promise of AI – a combination of layered computer algorithms known as neural networks that attempt to process information like a human brain, only with exponentially greater efficiency. AI is already a part of our daily lives, responsible for speech recognition on smartphones, automated trading on NASDAQ, and the autopilot of a self-driving car.
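The idea of “layered algorithms” can be made concrete: each layer of a neural network computes weighted sums of its inputs and passes them through a simple nonlinearity, and stacking layers yields a network. A minimal Python sketch, with arbitrary untrained weights chosen purely for illustration:

```python
import math

def relu(xs):
    # Rectified linear unit: the nonlinearity between layers.
    return [max(0.0, x) for x in xs]

def layer(inputs, weights, biases):
    # One dense layer: each neuron takes a weighted sum of the inputs.
    return [sum(w * x for w, x in zip(ws, inputs)) + b
            for ws, b in zip(weights, biases)]

def forward(x):
    # Two stacked layers form a (very) small "deep" network.
    hidden = relu(layer(x, [[0.5, -0.2], [0.1, 0.8]], [0.0, 0.1]))
    out = layer(hidden, [[1.0, -1.0]], [0.0])[0]
    # Squash the result to a 0-1 score with a sigmoid.
    return 1.0 / (1.0 + math.exp(-out))

score = forward([1.0, 2.0])
```

A real network would learn these weights from data; here they only show the mechanics of a forward pass.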

And with the success of experiments like AlphaZero, insurers are asking a simple question with increasing urgency: If machines can be taught to think like a chess grandmaster, a trucker, or a bond trader, why can’t AI reason like a pricing actuary or a life insurance underwriter?

To find the answer, we must first understand how data scientists teach a machine.

Learning from Knowledge

Insurance companies create detailed manuals that describe processes, such as underwriting. While these documents enable data scientists to understand certain business processes, they’re less helpful to machine-learning algorithms.

But wait … can’t AI systems process text? The short answer: yes, through natural language processing (NLP), certain AI systems can perform tasks after consuming large blocks of textual data. For example, by scanning Wikipedia, such a system could answer questions in a game show like Jeopardy!

It is important to note, however, that these NLP systems only parrot back language patterns, without grasping the underlying meaning well enough to identify or apply complex step-by-step processes from the text. In other words, underwriters could pose questions to an NLP system trained on an underwriting manual, but such a system would lack enough understanding to render decisions based on that same underwriting manual.
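This “parroting” limitation is easy to demonstrate with a toy retrieval system. The sketch below (the manual sentences are invented purely for illustration) finds the sentence whose words best match a question – useful for lookup – yet at no point does it understand or apply the underwriting rule it retrieves:

```python
import math
from collections import Counter

# Invented "manual" sentences, purely for illustration.
manual = [
    "Applicants over age 60 require a medical exam.",
    "Smokers are rated at least table two.",
    "Decline applicants with uncontrolled diabetes.",
]

def bag_of_words(text):
    # Represent text as word counts, ignoring word order and meaning.
    return Counter(text.lower().replace(".", " ").replace("?", " ").split())

def cosine(a, b):
    dot = sum(a[word] * b[word] for word in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def answer(question):
    # Pure retrieval: return the most word-similar sentence.
    # The system never *applies* the rule it finds.
    q = bag_of_words(question)
    return max(manual, key=lambda s: cosine(q, bag_of_words(s)))

reply = answer("How are smokers rated?")
```

The system can surface the smoker rule when asked about smokers, but it could never take an application and render the decision that rule implies.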

Learning from Data

Bottom line: Machines learn from data – not manuals.

The data scientist starts with categories or values that are already present and teaches the machine to predict those that are missing, often through a process called supervised learning. This information could easily come from the underwriting files of insurance applicants, including electronic health records, driving histories, credit reports, and other information pertinent to an underwriting decision.

Of course, data alone is not enough without direction. The data scientist must establish a target. For Google AlphaZero, the input data are the moves that each player makes during a given game of chess, and the target is the winner. Using this data, machine learning models could be built to detect patterns of the moves that result in a win and then drive the system to apply these successful strategies. In an underwriting scenario, a target could be the decision to either underwrite or reject an application, the actual mortality experience of an underwritten case, or even the profitability of an issued policy.
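In miniature, supervised learning looks like this: each training case pairs input features with a known target, and the model generalizes to new cases. The sketch below uses invented applicant features (age, smoker flag) and past decisions as targets, with a simple nearest-neighbor rule standing in for a real underwriting model:

```python
import math

# Hypothetical training data: (age, smoker flag) -> past decision (the target).
history = [
    ((25, 0), "approve"),
    ((30, 0), "approve"),
    ((55, 1), "decline"),
    ((60, 1), "decline"),
]

def predict(applicant):
    # 1-nearest-neighbor: copy the decision of the most similar past case.
    features, decision = min(
        history,
        key=lambda case: math.dist(case[0], applicant),
    )
    return decision

decision = predict((28, 0))  # a new applicant resembling the young non-smokers
```

Because every past case carries a target, the machine has something to imitate; remove the "approve"/"decline" labels and the same data teaches nothing about decisions.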

When data exists with no targets, however, the machine does not know what to learn, because it cannot test its decisions against outcomes. Such “unsupervised” learning simply identifies patterns without a purpose, or clusters similar applicants without being able to render an underwriting result.
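Without targets, the best a machine can do is group similar cases, as in the classic k-means algorithm. A sketch with invented (age, blood pressure) points: the algorithm finds two clusters, but nothing in the data says which cluster, if either, should be approved:

```python
import math

def two_means(points, iters=10):
    # Classic k-means with k=2; initialize centers with the first and
    # last points so the run is deterministic.
    centers = [points[0], points[-1]]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            nearest = 0 if math.dist(p, centers[0]) <= math.dist(p, centers[1]) else 1
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster.
        centers = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# Hypothetical applicants as (age, systolic blood pressure) points.
applicants = [(25, 115), (28, 120), (30, 118), (60, 150), (62, 155), (65, 148)]
groups = two_means(applicants)
```

The younger, lower-pressure applicants separate neatly from the older group – but labeling one group as better risks is a judgment the data alone cannot supply.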

Big Data or Infinite Data

So, the more data – and the more targets – the more a machine can learn. Computer scientists have spent years feeding vast amounts of information into machines in an attempt to teach AI networks to solve problems independently – or to learn to think.

Consider the semi-autonomous electric cars manufactured by Tesla. These computerized vehicles continually send terabytes of target-rich data back to their manufacturer. Tesla’s AI learns from every decision made by its thousands of human drivers, second by second. What caused an accident? How quickly did the vehicle get from point A to point B? How many sudden stops were made? Each piece of data makes all Tesla vehicles smarter.

It’s not uncommon for a human underwriter deciding a case, or an actuary developing pricing assumptions, to confront not too little but too much potentially useful data. Identifying and analyzing the information relevant to risk assessment is often far too time-consuming. Increasingly, insurers have begun to wonder whether machine learning models could be developed to access all available information on the internet, in real time, from medical papers to Twitter feeds, and then help insurers and reinsurers connect the dots to determine risk-rating factors and discover underwriting opportunities.

This is why “big data” is such a big deal in insurance. Many have speculated that AI could confer significant competitive advantages, enabling carriers to increase efficiencies and lower costs without substantially increasing portfolio risks.

Yet AlphaZero demonstrated the ability to master chess without any pre-existing data. If no outside data is necessary, why do data scientists tend to ask for more? And could an army of robot actuaries or robot underwriters start from nothing and learn underwriting or actuarial science?

Hardly. AlphaZero could draw on something much better than big data to learn how to checkmate: effectively infinite data, because it played chess against itself and learned from every move.
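Self-play can be demonstrated in miniature. In the sketch below, a program plays a trivial five-stone take-away game (a variant of Nim, chosen purely for illustration) against itself thousands of times, records which moves led to wins, and then plays greedily from its own experience, with no outside data at all:

```python
import random
from collections import defaultdict

# Toy game: 5 stones, players alternate taking 1 or 2 stones,
# and whoever takes the last stone wins.
wins = defaultdict(int)    # (stones_left, stones_taken) -> wins for that mover
plays = defaultdict(int)   # (stones_left, stones_taken) -> times tried
rng = random.Random(42)

for _ in range(20000):
    stones, player, moves = 5, 0, []
    while stones > 0:
        take = rng.choice([1, 2]) if stones >= 2 else 1
        moves.append((player, stones, take))
        stones -= take
        player = 1 - player
    winner = 1 - player  # the player who just took the last stone
    for mover, state, action in moves:
        plays[(state, action)] += 1
        if mover == winner:
            wins[(state, action)] += 1

def best_move(stones):
    # Greedy policy over the program's own self-generated experience.
    options = [1, 2] if stones >= 2 else [1]
    return max(options,
               key=lambda a: wins[(stones, a)] / max(plays[(stones, a)], 1))

best = best_move(5)
```

Here the statistics from self-play point to the mathematically optimal opening: take two stones, leaving the opponent a losing count of three. Every game the program needs, it generates itself – the essence of AlphaZero’s “infinite data.”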

Unfortunately, underwriting and actuarial science are both far more complex than a game of chess; there are far too many variables to consider when evaluating a case or rating a risk. In the science fiction movie The Matrix, an artificial intelligence harnessed vast computational power to simulate an entire world of individuals making lifestyle choices, suffering diseases and accidents, being born, giving birth, and dying. Only with such a controlled simulation could we train a fully “artificial” underwriter to examine all possible decisions, link these choices to outcomes, learn from the results, and try again … and again … and again.

Science Fiction and Science Fact

Life is not a board game – and neither are underwriting and actuarial science. Teaching a machine to assess new human diseases and underwrite unknown conditions would require Matrix-level computational might that is pure science fiction.

In contrast, modern machine learning algorithms are science fact. Today’s AI can be trained on large computing grids to produce better outcomes, and insurtech companies are harnessing human experience and the power of AI to enhance many insurance business processes. Yet human underwriters and actuaries remain an integral – and essential – part of these processes.

AlphaZero and experiments like it have set the chessboard, and it’s up to new generations of data scientists, underwriters, actuaries, and others to continually play, and improve, the game one move at a time.


The Author

  • Jeff Heaton, Ph.D.
    Vice President and Data Scientist,
    Global Research and Data Analytics

    RGA
