
For many years, insurance was beautifully simple, its models and underlying capabilities clearly defined. Policies were developed and priced by actuaries. Biometric underwriting data was collected during the application process, and premiums were calculated from that data to price the protection being offered.
These biometric data sets were, at best, incomplete, and because they were collected at a single point in time, the longer a policy was in force, the more out of date the data became. There was nothing wrong with this: it was simply how underwriting was done, and the industry has always made the most of the data available.
Nowadays, the types of data available to insurers have expanded tremendously. People are digitally connected in ways that were not possible even a few years ago, and completely novel information is arriving from both static sources and continuous live streams. Mobile phones are in virtually every pocket or purse, and wearable devices, many of which double as fashion statements, are tracking and charting every step, breath and heartbeat. Meanwhile, IoT (Internet of Things) sensors are rapidly transforming people’s relationships with their physical environments, producing rich pools of data from which actions and behaviour patterns can be inferred and utilised as never before.
All of these new streams are joining the sizeable pools of historical information collected from current and legacy policy administration and claims management systems. Together they are already enabling efficiencies such as predictive models that correlate disparate data streams and make underwriting faster and simpler, as the sketch below illustrates.
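To make that idea concrete, here is a minimal sketch of the kind of predictive model described above: it blends a traditional application-time feature (age) with a hypothetical wearable-derived feature (average daily step count) to score claim risk. The feature names, synthetic data and model choice are illustrative assumptions, not a description of any insurer’s actual system.

```python
# Illustrative sketch only: a toy risk model blending a traditional
# underwriting feature with a hypothetical wearable-derived feature.
# All data here is synthetic; the feature names are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)
n = 1_000

age = rng.integers(25, 70, size=n)               # application-time datum
daily_steps = rng.normal(7_000, 2_500, size=n)   # continuous wearable stream, aggregated

# Synthetic ground truth: risk rises with age, falls with activity.
logit = 0.04 * (age - 45) - 0.0002 * (daily_steps - 7_000)
claim = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, daily_steps])
model = LogisticRegression(max_iter=1_000).fit(X, claim)

# Score a new applicant: 52 years old, averaging 4,000 steps a day.
prob = model.predict_proba([[52, 4_000]])[0, 1]
print(f"Estimated claim probability: {prob:.2%}")
```

Even a toy like this shows why the quality of the underlying streams matters: the model can only be as sound as the data feeding it.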
However, navigating, interpreting and making sense of these vast commingled pools of traditional and new data can be challenging. Cloud systems are easing storage and aggregation needs, and sophisticated models coupled with fast-evolving computing capabilities are enabling complex analyses of traditional and non-traditional data at high speed. Insurers that harness these tools and techniques can build models that improve pricing accuracy, make customised products possible, and change the overall customer experience for the better.
Still, insurer data needs to be clearly understood, well defined, and of a quality that lets actuaries and underwriters use it with confidence, because it will support models and decisions that are locked in for years, if not decades. Insurers know they need to learn about and adapt to this new abundance (some might say overabundance) of data, and many companies are already doing so. If this could be done easily, or if the right data were easily accessible, new frameworks such as dynamic pricing, dynamic underwriting and real-time claims adjudication and payment might be as simple to use as swiping a card. This, however, is not the case, and for several good reasons.
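To give a sense of what “dynamic pricing” could mean in practice, here is a minimal sketch, assuming a hypothetical scheme in which a base premium is periodically recalculated as fresh wearable data arrives, with the movement capped per adjustment. The formula, the activity target and the caps are illustrative assumptions, not an actual pricing framework.

```python
# Minimal sketch of a hypothetical dynamic-pricing adjustment.
# The target, discount factor and caps are illustrative assumptions.
def adjust_premium(base_premium: float,
                   avg_daily_steps: float,
                   target_steps: float = 8_000,
                   max_swing: float = 0.10) -> float:
    """Nudge the premium up or down with activity, capped at +/- max_swing."""
    # Shortfall or surplus relative to the activity target, as a fraction.
    deviation = (target_steps - avg_daily_steps) / target_steps
    # Clamp so no single recalculation moves the price more than max_swing.
    swing = max(-max_swing, min(max_swing, 0.5 * deviation))
    return round(base_premium * (1 + swing), 2)

# An active month earns a discount; a sedentary one costs a little more.
print(adjust_premium(100.00, avg_daily_steps=11_000))  # 90.0 (discount capped at -10%)
print(adjust_premium(100.00, avg_daily_steps=4_000))   # 110.0 (surcharge capped at +10%)
```

Even this toy exposes the hard questions the paragraph above raises: the recalculation is trivial, but only if the incoming data is trustworthy and the pricing rules have been vetted for the long life of the policy.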
Consider the costs of owning and using data today. In the past, storage and computing dominated; as both have become more efficient, their prices have dropped and will most likely continue to fall. Two costs now dominate: developing and maintaining the software that processes the data and administers its systems, and ensuring the right people are in place to work with the data. Those people must be professionals who understand both data domains and corporate needs, can build the platforms and solutions that best serve those needs, and can collect and administer the right data appropriately. Rare is the company not scouring the market for professionals with this combination of technology and data science proficiency, and the search grows harder every day.
The insurance industry has clearly reached an inflection point.