September 2021

A Needle in a Haystack: How Can Underwriters Effectively Use Alternative Data Sources to Streamline Risk Analysis?

By
  • Jordan Durlester
  • Jacqueline Waas
In Brief

As insurers shift to incorporate new and alternative data sources, underwriters are taking on more specialized skills. In On the Risk, RGA data strategist Jordan Durlester and underwriter Jacqueline Waas join forces to discuss the value of collaboration in more effectively applying new evidence sources to underwrite new business.

New datasets and digital capabilities have emerged, extending the scope of how and where underwriters can analyze and apply their underwriting expertise.

Just 25 years ago, at the beginning of many current senior underwriters’ careers, paper files and DOS computer systems were standard. In time, fully digital and networked underwriting administration systems allowed multiple people to access, edit and review the same files at the same time. These digital systems led to logic-driven underwriting platforms and the ability for clean cases to go through an expedited process. In this era, new business development staff entered the data into a rules engine that would in turn determine which level of underwriter was best suited to the case, from junior to senior, depending on its complexity. Advances in technology and data produced additional efficiencies, and fully automated and accelerated underwriting programs have become more commercially viable.
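
In spirit, that routing logic resembled the minimal Python sketch below. The attributes, thresholds and tier names are invented for illustration; real rules engines weighed many more factors.

    def route_case(face_amount: float, age: int, has_medical_flags: bool) -> str:
        """Triage an application to an underwriting tier (illustrative only)."""
        # Clean, low-amount, younger-age cases can skip straight through.
        if not has_medical_flags and face_amount <= 250_000 and age <= 50:
            return "expedited"
        # Routine complexity goes to a junior underwriter.
        if face_amount <= 1_000_000:
            return "junior underwriter"
        # High face amounts or flagged histories go to a senior underwriter.
        return "senior underwriter"

    print(route_case(face_amount=150_000, age=34, has_medical_flags=False))
    # -> expedited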

As insurers gain access to additional and more varied sources of information, the underwriter’s role in deciphering this data will become more vital than ever. While access to information has always been essential to successful risk assessment, it is the ability to translate varied sources of information into actionable insights that drives progress.

The COVID-19 crisis hastened this evolution, and 2020 brought many changes and challenges for the industry, not the least of which was determining how to underwrite in an environment where traditional labs and exams were not as readily available due to pandemic-related restrictions. Beyond such challenges, the insurance industry was pushed by new entrants and insurtech companies to create new and innovative ways to remove friction from the underwriting process.

It Was There All Along

Many companies, from incumbent carriers and reinsurers to startups and insurtechs, are looking to data-driven solutions to meet the changing needs of consumers. These include the incorporation of alternative evidence derived from sources such as electronic health records (EHRs), clinical labs and medical claims data to help improve the consumer experience and accelerate the underwriting process.

With streams of new data arriving, the industry is exploring whether that data can be used in lieu of labs and physician statements to approve cases. Ultimately, the goal is to provide a frictionless and more personalized approach for customers, while simultaneously improving underwriting efficiency without compromising risk assessment.

It is interesting to note that much of the data being used to support this evolution has existed since the mid-1960s but was not nationally prioritized until 2004 with the creation of the Office of the National Coordinator for Health Information Technology (ONC). Over the past decade, there has been a major push toward standardizing the exchange of data. Most notably, health care interoperability has taken a big step forward through the adoption of Fast Healthcare Interoperability Resources (FHIR) standards. The FHIR standard, developed with the exchange of the US Core Data for Interoperability (USCDI) in mind, is now widely accepted by the US government and EHR providers alike.
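
As a concrete illustration, a lab result exchanged under FHIR R4 arrives as an Observation resource: structured JSON in which the test is identified by a standard code (here LOINC) and the result carries a value and unit. The minimal Python sketch below, using invented values, shows how a downstream system might lift out the fields an underwriting workflow needs.

    import json

    # An illustrative FHIR R4 Observation for a serum cholesterol result,
    # roughly as it might arrive from an EHR or HIE. Values are invented.
    observation_json = """
    {
      "resourceType": "Observation",
      "status": "final",
      "code": {
        "coding": [{
          "system": "http://loinc.org",
          "code": "2093-3",
          "display": "Cholesterol [Mass/volume] in Serum or Plasma"
        }]
      },
      "effectiveDateTime": "2021-03-15",
      "valueQuantity": {"value": 212, "unit": "mg/dL"}
    }
    """

    def summarize_observation(resource: dict) -> dict:
        """Pull out the fields an underwriting workflow typically needs."""
        coding = resource["code"]["coding"][0]
        quantity = resource.get("valueQuantity", {})
        return {
            "loinc_code": coding["code"],
            "test": coding.get("display", ""),
            "value": quantity.get("value"),
            "unit": quantity.get("unit"),
            "date": resource.get("effectiveDateTime"),
            "status": resource.get("status"),
        }

    print(summarize_observation(json.loads(observation_json)))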

Today, handwritten medical records are becoming a thing of the past, replaced by medical codes and variable digital data. One of the most impactful areas of innovation is the industry’s ability not only to capture the data but to apply it effectively. Capturing data is one thing; efficiently processing it is quite another.

Take digital health data (DHD) as an example. The data originates within the health care system and is available through various means from many sources, such as health care provider EHRs, health information exchanges (HIEs), consumer patient portals, health care payers and pharmacy benefit managers (PBMs). The data includes many different components, such as lab data, medications, physician notes, diagnoses, medical procedures and diagnostic study results. The data is also filled with codes and code sets that reflect multiple medical vocabularies, including ICD, SNOMED, HCPCS and CPT. How can the underwriter best receive the information from all these disparate sources for one individual case and apply it to risk analysis? For this information to be useful to the underwriter, an infrastructure must be in place that allows this data to come together in an understandable way. By setting up a data flow and system that pulls in the information from all these sources and synthesizes it in a way underwriters can review, DHD becomes much more valuable to the underwriting process and can, in some instances, be used in lieu of labs, paramedical exams and attending physician statements (APS).
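
A minimal sketch of that kind of consolidation, in Python, might normalize each record, whatever its source or vocabulary, into one common shape and merge everything into a chronological case view. The feeds, codes and case details below are illustrative only.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ClinicalEvent:
        """One normalized piece of digital health data, whatever its source."""
        case_id: str
        source: str       # e.g., "EHR", "HIE", "PBM"
        vocabulary: str   # e.g., "ICD-10-CM", "SNOMED CT", "CPT", "NDC"
        code: str
        description: str
        recorded: date

    def build_case_timeline(events: list[ClinicalEvent]) -> list[ClinicalEvent]:
        """Merge events from disparate feeds into one chronological view."""
        return sorted(events, key=lambda e: e.recorded)

    # Invented records for a single applicant, drawn from three feeds.
    events = [
        ClinicalEvent("case-001", "PBM", "NDC", "00000-0000",
                      "atorvastatin 20 mg", date(2020, 6, 1)),
        ClinicalEvent("case-001", "EHR", "ICD-10-CM", "E78.5",
                      "Hyperlipidemia, unspecified", date(2019, 11, 4)),
        ClinicalEvent("case-001", "HIE", "CPT", "80061",
                      "Lipid panel", date(2019, 11, 4)),
    ]

    for event in build_case_timeline(events):
        print(event.recorded, event.source, event.code, event.description)

With a single normalized timeline, the underwriter reviews one coherent history rather than several source-specific extracts.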

Curation Is Key

Data is the new gold, and the possible underwriting applications leveraging the flood of new data are exciting to consider. However, to make these possibilities a reality, improved methods of data curation will be crucial. If the collection process is not set up appropriately, underwriters may face a deluge of information with limited capacity to sift through it to identify what is relevant. The right tools can streamline and format the information so underwriters can more easily review and apply the data.

Technological advancements have allowed businesses to capture, manage and process big data in an increasingly expansive way. A growing field of work has emerged dedicated to identifying ways to analyze, systematically extract information from, or otherwise handle data sets too large or complex for traditional data-processing software. Through improved data capture, businesses are able to organize and capitalize on data found all around us. New offerings become available with growing frequency, utilizing insights from sources such as social media channels and wearable devices, as well as datasets containing information such as credit and prescription histories. It is important to point out that when incorporating any of these sources, careful consideration is required to ensure the data is being used responsibly and ethically, insights are accurate, and any models used are unbiased and transparent.

To revisit the DHD example, how the data arrives may look very different depending on the source. DHD comprises hundreds of thousands of codes from various code sets and medical vocabularies, and to effectively underwrite a case using DHD, the underwriter must have a deep understanding of the risks associated with each code. For this reason, RGA developed a DHD scoring tool in response to the insurance industry’s search for a way to make the information more manageable.
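
RGA’s actual methodology is proprietary and far more sophisticated, but purely to illustrate the shape of the problem, the toy Python sketch below maps a handful of codes to hypothetical risk weights, aggregates them, and refers anything unrecognized to a human reviewer.

    # A toy illustration of code-level risk scoring -- not RGA's actual
    # methodology. The code-to-weight mapping is entirely hypothetical.
    HYPOTHETICAL_CODE_WEIGHTS = {
        "E11.9": 25,    # ICD-10-CM: type 2 diabetes without complications
        "I10": 10,      # ICD-10-CM: essential (primary) hypertension
        "Z00.00": 0,    # ICD-10-CM: routine adult exam, normal findings
    }

    def score_case(codes: list[str]) -> int:
        """Sum hypothetical risk weights; flag unrecognized codes for review."""
        total, unrecognized = 0, []
        for code in codes:
            if code in HYPOTHETICAL_CODE_WEIGHTS:
                total += HYPOTHETICAL_CODE_WEIGHTS[code]
            else:
                unrecognized.append(code)
        if unrecognized:
            # Silently ignoring unknown codes could understate risk.
            print(f"Refer to underwriter -- unscored codes: {unrecognized}")
        return total

    print(score_case(["I10", "E11.9", "R03.0"]))  # flags R03.0, returns 35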

Another interesting example of how big data can be used and correlated with insurance risk is the development of programs utilizing credit data for life insurance. Credit scores have been used for many years to simplify the process of buying a car, insuring a house or applying for a credit card, and in recent years new credit-based insurance scores have been applied within the life insurance industry as well. Credit-based insurance scores are FCRA-compliant and take a different view of the client’s behaviors, looking at credit attributes that quantify risk associated with a person’s mortality. This view can help life insurance companies satisfy a variety of internal objectives around underwriting efficiencies, underwriting programs, product performance, target marketing and inforce management.

It is impossible to separate data governance from data strategy. They are inherently dependent upon one another, like a modern-day chicken-and-egg scenario. To adapt the line Alec Baldwin made famous in the film adaptation of David Mamet’s Glengarry Glen Ross: “Always be governing.” As it pertains to underwriting, there are many key phases and steps, from when a case is received and who reviews it, to how it is reviewed and how it is handled after an underwriting decision has been made. A proper data strategy must govern all these phases, and their fluid transition periods, comprehensively.

For instance, look to medical technology. We know modern mortality is driven more by longer-term chronic illness than by traditional communicable diseases (COVID-19 aside). Figuring out how to understand and capture early indicators of these chronic illnesses could be groundbreaking for the life and health underwriter. In addition, data streams from Internet of Things (IoT)-connected devices could help underwriters understand how to rate chronic illnesses in a more nuanced and personalized manner. This is a game changer, but only if consumer data is governed and protected.

As we write this article, there is a great deal of activity taking place to ensure data is being used responsibly. States such as New York and Colorado have either enacted or proposed legislation seeking to prevent unfair discrimination or bias within algorithms and predictive models. Since insurers are regulated at the state level, there is growing concern about how to incorporate new data into underwriting algorithms and models in an environment where insurers may move faster than regulators.

It Is All Part of the Process

In this new age of big data, the problem is not finding and sourcing the data, but rather sifting and filtering it to decide what is important – like looking for a needle in a haystack. If there is too much noise and clutter, underwriters cannot effectively use these reams of data to the benefit of insurers and their customers. Put simply, you can have too much of a good thing.

Thus, data must be processed in a way that ensures underwriters are able to access and use it effectively. Setting up the infrastructure and establishing procedures for how the data team works with the underwriting team are vital. A key aspect of the working relationship between data scientists and underwriters is the management of the data.

These channels offer new data sources and combinations; however, without appropriate data management and advanced analytics techniques, their true value cannot be realized. Insights gained from alternative sources have the potential to transform the insurance purchasing process for consumers around the world while meeting consumers’ protection needs, but the industry must also respect their privacy and act in their best interests.

Looking ahead, utilizing actionable insights from new data sources will be essential to meeting the growing demand for a frictionless customer journey. Building consumer trust is the cornerstone of success. Only if consumers have faith that their information is protected will they feel confident providing that data to insurers and truly embrace the benefits of data-driven products and services. The responsible use of their data requires constant assessment of practices, especially in a field that is changing so fast.



Meet the Authors & Experts

Jordan Durlester
Vice President, Data Strategy, Data Strategy and Infrastructure

Jacqueline Waas
Director, Underwriting Research and Development


Reprinted with permission of ON THE RISK, Journal of the Academy of Life Underwriting (www.ontherisk.com).