In the midst of another crypto winter and seemingly lackluster Metaverse developments, a new tech craze has emerged to capture the fascination of the public: generative artificial intelligence (AI).
This technology shifts AI from simply identifying and classifying existing data to creating new, often human-like content, such as essays, poems, images, music, and videos. Awareness of this technology has been driven by the public release of OpenAI’s ChatGPT and image-generating apps such as DALL·E, Stable Diffusion, and Midjourney.
Type "write a poem about life insurance" into ChatGPT2, and the following appears a few seconds later:
Life insurance, a safety net,
For those we love, we won't forget.
A promise to protect, come what may,
Peace of mind, every day.
Naturally, this has created considerable hype and media attention. ChatGPT has become the fastest-growing consumer application in history, with an estimated3 100 million active users in January 2023. Artists are up in arms and suing4 Stability AI (the maker of Stable Diffusion) and Midjourney, arguing that their work has been used to train the AI without consent. Schools and academic institutions are already trying to fend off ChatGPT-generated submissions: many media articles have highlighted how ChatGPT can generate papers and essays that would pass everything from a high school exam to parts of medical certification. In response, there is a push to develop tools that identify AI-generated content, including GPTzero5 and OpenAI’s AI Text Classifier6, with varying degrees of success.
The fact that ChatGPT sounds impressive should be no surprise: the base version of the machine learning model (GPT-3) was trained using 45 terabytes of data and equipped with 175 billion parameters to make predictions.7
Amid all the hype, it's easy to forget what ChatGPT really is. Users must appreciate that the machine does not “think”: its results are statistical predictions. Claims of sentience and intelligence have already surfaced. The risk is in giving these systems too much credit and hence placing too much trust in them, leading to expectations that AI cannot currently fulfill. These tools have incredible potential, but humans still need to apply critical thinking when using them.
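The point that these outputs are statistical predictions, not thought, can be made concrete with a deliberately simplistic sketch: a bigram model that predicts the next word purely from frequency counts in its training text. This is not how GPT models work internally (they use large neural networks over tokens), but the underlying principle of predicting likely continuations rather than reasoning is the same. The training text below is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "training corpus" -- purely illustrative.
training_text = (
    "life insurance protects your family "
    "life insurance provides peace of mind "
    "life insurance is a safety net"
)

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent continuation of `word`,
    or None if the word was never seen in training."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# "insurance" follows "life" in every training sentence, so the model
# confidently "completes" the phrase -- without understanding either word.
print(predict_next("life"))
```

The model produces fluent-looking continuations for familiar phrases and nothing at all for unfamiliar ones; scaled up by many orders of magnitude, the same dynamic explains both ChatGPT's fluency and its confident errors.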
We can also expect generative AI to become increasingly integrated into everyday tools, such as Microsoft Office. Search engines are already testing this, including Google, Baidu, and Microsoft’s Bing8.
Potential Use Cases in Life Insurance
There are many areas where generative AI – or AI more broadly – will likely have an impact on the insurance value chain. In fact, AI is already in use in many instances, such as chatbots selling funeral insurance, but we can expect the sophistication and range of use cases to increase significantly.
It is also important to bear in mind that the models used will not necessarily be the public versions seen today (like ChatGPT), but rather a form of AI adapted and trained on content for a specific domain or company. An insurer would not want their virtual assistant to rely on scraping the internet for product information, for example.
The list below includes just a sampling of the potential applications of AI in insurance.
- Needs analysis and robo-advice
- Product recommendation engines
- Personalized insurance offers – AI is already able to link to third-party data, and generative AI could leverage this to create customized offers based on a personalized insurance structure.
- Voice and chatbot consumer engagement and sales
- Customer segmentation – AI could identify those with a greater propensity to buy insurance and automatically generate an offer or market to them.
- Technical sales content generation: text, images, presentations, video
- Social media posts, blogs, podcast scripts
- Campaign idea generation
- Writing assistance and copywriting
- Personalized web and email content
- Stock photos, videos, and music
- Product brochures and guides
- Logo designs or other artwork
- Social listening – AI could monitor social posts for brand mentions, synthesize data, and compile reports, dashboards, or recommend actions.
- Automatic generation of 3D content, including virtual reality experiences
- Dynamic pricing – AI could leverage increased access to user data, such as data gathered by sensors and wearables, to analyze and generate pricing or offers dynamically – especially in conjunction with other data.
- Simplified issue and limited underwriting products – AI could price these products, which are largely selective of healthier lives and often more expensive given the reduced underwriting, at a more granular level to better segment risk.
- Experience analysis and the parameterization of pricing models – AI could power automated dashboards, reporting on in-force portfolios and performing scenario testing on new business premiums.
Product design and development
- Idea generation and brainstorming
- Personalization of products – AI could remove the cost barrier of enabling policy-specific changes or features.
- Competitor analysis and product comparisons
- Facultative optimization – AI could save underwriters time and enable them to focus on complex cases.
- Risk assessment efficiency – AI could use alternative sources of data to improve turnaround times.
- Conversion of unstructured medical information into usable data
- Case synopsis and letter/email generation
- Straight through processing via virtual agents
- Risk data analysis
- Physician engagement – AI could help doctors generate medical reports or complete insurance forms.
- Claims validation using external data
- Claims letters and summaries for assessors
- Processing medical information and employer reports
- Claims reporting for internal analysis
- Assistance with critical illness diagnoses
- Claim submission via virtual agents
- Physician engagement – AI could distill medical information from doctors’ notes and medical reports and submit to insurers with suggested ICD-10 or other codes.
- Fraud prevention and detection