
The Ethics of Algorithms in Insurance

Enrique Ruiz
Managing Director
RGA Spain and Portugal

Is it acceptable to discriminate against people based on the data we have about their lives? There is a great deal of public information on the Internet about our clients, both current and potential - far more than we might expect. To what extent can we take advantage of it?

The Digital Footprint

It is difficult to know how much detail about our lives circulates in the market. Our digital footprint is deep, formed by the traces we leave whenever we use the Internet. You do not have to be very active online to leave traces of your tastes, trips, purchases and more, even if you never share photos, follow sites or "like" an application. The most advanced tracking tools record every movement a user makes on a website, down to which parts of the screen the cursor rests on the most, even without a single click.

When you sign up for Twitter, your Tweets are public by default, which means that anyone can see them. What is more, if you authorize a third-party application to access your account, that application may be able to see even your protected Tweets.

Comments on Facebook leave a record, and hiding your comments on a post does not prevent them from being visible to any other user with access to that post.

Make no mistake: every time we are online, the Internet is listening. We must accept that the Internet amounts to a rich database about us, one that allows us to be classified by all sorts of criteria.

New business models based on Big Data

This information helps companies target content at specific markets and consumers, and helps advertisers follow our movements across multiple websites. The business model behind most web services is to provide a service completely free of charge in exchange for the data collected from users. That data is monetized through marketing services that run personalized advertising campaigns for anyone who wishes to promote a product or service to specific groups of people.

What consequences can this have? So far, people do not seem overly worried. If we asked people on the street, surely a high percentage would tell us that a few personalized ads do not bother anyone.

We are not very worried because most of us think that our unremarkable lives cannot interest anyone. Some may think that a company having this knowledge about us simply means it will send us discounted offers on its products. But beyond giving up some privacy and receiving some personalized ads, there are many gray areas. For example, it can be quite annoying that merely visiting a website triggers an e-mail asking why we have not bought anything, or that a visit to a physical store results in a call from a robot asking a series of questions about our purchases. Where do we draw the line?

In some areas, such as credit, consumers are already accustomed to merchants using data about us. The goal, ultimately, is to assess credit risk, and society accepts that a credit history is a sign of personal responsibility that can be applied in other areas.

That general acceptance is far less clear when it comes to the "black box" predictive models that Big Data makes possible. Classifying a potential client according to credit history may not discriminate based on race or religion, but there is plenty of personal information on the Facebook page of an average user that could very well serve that purpose. If a prediction model collects this information, it is reasonable to expect users to be indignant, and regulators to take action on the matter.

Personal data protection

Users are not alone in defending their rights; the authorities also work to protect us. In Europe, Article 5.3 of Directive 2002/58/EC regulates the requirements for informing users and obtaining their consent when collecting data.

In Spain, data processing that uses device fingerprinting techniques must follow the criteria set out in the "Guide on the Use of Cookies" of the Spanish Agency for Data Protection (AEPD) and the provisions of the General Data Protection Regulation on the processing of personal data.

It is increasingly common for life insurers to use so-called "non-traditional" sources of public data (credit data, court documents, vehicle records, etc.) to feed the algorithms that calculate a score when someone buys insurance. However, very few insurers use data from social networks, perhaps out of an abundance of caution.

Now, by its very nature, all insurance pricing is discriminatory; the important thing is to know when that discrimination really matters. In life insurance, it is clear that older people will pay more than younger people, and we all accept the soundness of this standard. So, is it acceptable to discriminate on the basis of smoking? Would it be acceptable to discriminate against people who use WhatsApp more than 50 times a day if we knew that they tend to die younger?

Pioneering regulators

In the U.S., New York is the first state to allow insurers to use data collected from social networks to price policies based on their customers' behavior. Of course, companies must demonstrate that their algorithms do not discriminate unfairly. For this reason, the state's regulator has published a circular letter with a series of guidelines advising life insurers of their legal obligations with respect to the use of such data.

The circular establishes that an insurer should not use data sources, algorithms, or predictive models in underwriting unless it has determined that those processes do not collect or use prohibited criteria and that their use does not discriminate unfairly.

In addition, the insurer must establish that any external data sources, algorithms, or predictive models are based on sound actuarial principles, with a valid explanation or rationale for any claimed correlation.

Standing still is not an option

There is a considerable opportunity for the insurance sector in incorporating so-called "non-traditional" data into our selection and pricing systems. We will have to act very cautiously, but doing nothing is not an option. Customers are asking us for ever more personalized products and services, which suggests they will be willing to let us use public information for this purpose.

What, then, are the permitted ways to use personal data? I do not know whether we have definitive answers yet, but it seems clear that such data could be used as long as a company does not discriminate on factors that society deems inadmissible, such as race, religion or sexual orientation.

Transparency and the ethical use of data are vital. Companies must do what they can to be transparent and to help consumers understand what data they are collecting and for what purpose. Companies that are honest and earn society's trust will be the ones that succeed.


Reprinted from ADN del Seguro, June 2019