
There Is No Partial Credit in Real Life: A Meditation on Actuarial Bias in Group Pricing

On many of the exams we take on the path to becoming actuaries, if we had to work through a math problem to arrive at a numerical answer, we had to show our work. If, in our haste, we made a simple math error that led to an incorrect numerical answer, yet our work demonstrated that we knew how to do the problem conceptually, we frequently received partial credit (maybe even most of the credit). This practice can breed a certain apathy about whether we arrived at the correct numerical answer, and a habit of failing to ask ourselves when we complete a problem, “Does my answer even make sense? Is it about what I would expect?” I know I was guilty of this at times.

Then I joined the working world as an entry-level actuary. In some ways that mindset subconsciously stayed with me. I would race through a project without spending adequate time to check my work and think about whether the conclusion made sense. I realized in a hurry that there is no partial credit in real life.

If we complete a financial calculation in the business world and get the wrong numerical answer, even if we largely applied the correct methodology, our answer is still simply wrong. In fact, it may not just be wrong: if it is not caught in time, it could lead to material financial losses for the company we work for. Rather than receiving partial credit for the work we have done, we could very well receive full blame for the wrong answer.

It Can Happen to Anyone

As we gain more experience in our careers, we get better at avoiding mistakes such as this, but the reality is that any one of us can still fall victim to them if we are not careful. One of the most famous (or infamous) examples is a paper published in 2010 entitled “Growth in a Time of Debt,” authored by Carmen Reinhart and Kenneth Rogoff, both respected Harvard economists. In 2009 they had published their book, “This Time Is Different: Eight Centuries of Financial Folly,” which traced 800 years of financial crises across the globe and the lessons that can be learned from them. The book was highly acclaimed and widely read within the financial sector, so when Reinhart and Rogoff followed it with “Growth in a Time of Debt” the following year, people took note and the paper was granted instant credibility.

In the paper, they concluded that as countries increase their public debt as a percentage of GDP, their economic growth rates decline. In particular, they noted that when public debt reaches a threshold of 90% of GDP, there is a cliff-like drop-off in the average economic growth rate. This result was frequently cited by politicians in the public debate over the need for austerity measures in the wake of the global financial crisis.

A few years after “Growth in a Time of Debt” was published, Thomas Herndon, a graduate student at the University of Massachusetts Amherst, tried to replicate its results for a class assignment. Try as he might, he couldn’t do it. His professors encouraged him to contact Reinhart and Rogoff, who provided Herndon a copy of the spreadsheet they had used. Much to Herndon’s surprise, as he was looking through the spreadsheet formulas, he found an error in the cell that calculated the average growth rate for countries with a debt/GDP ratio of 90% or greater. The formula read =AVERAGE(L30:L44), including 15 countries, when the authors clearly had intended to include 20 countries, i.e., =AVERAGE(L30:L49). They had unintentionally excluded five countries from their results. Furthermore, some of the countries correctly included in the average formula had no data in their cells.
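Both problems are easy to reproduce. Here is a minimal sketch, using made-up growth figures rather than the actual Reinhart and Rogoff data, of how a truncated range and silently skipped blank cells can each shift an average:

```python
# Minimal sketch of the two spreadsheet pitfalls described above,
# using made-up growth rates, NOT the actual Reinhart-Rogoff data.

# Twenty countries' average growth rates (%); None marks a blank cell.
growth = [-1.0, 2.4, None, 0.8, -2.2, 0.9, None, -2.7, 1.5, -2.0,
          2.6, -1.1, 0.3, -1.9, None, 3.8, 2.4, 3.1, 2.7, 3.5]

def excel_average(cells):
    """Mimic Excel's AVERAGE: blank cells are silently ignored."""
    values = [c for c in cells if c is not None]
    return sum(values) / len(values)

# =AVERAGE(L30:L44), the truncated range: only the first 15 rows.
truncated = excel_average(growth[:15])

# =AVERAGE(L30:L49), the intended range: all 20 rows.
intended = excel_average(growth)

print(f"Truncated range (15 rows): {truncated:.2f}%")  # about -0.20%
print(f"Intended range (20 rows):  {intended:.2f}%")   # about  0.77%
# Neither formula warns about the blank cells it skipped, a second,
# quieter source of error when some countries simply have no data.
```

With these illustrative numbers, the truncated range even flips the sign of the average, and nothing on the face of the spreadsheet hints that anything is wrong.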

In April 2013, Herndon and his professors published a paper responding to Reinhart and Rogoff. While the broad thesis that higher debt may lead to lower economic growth still held, their response showed that the supposed cliff-like drop-off in economic growth when debt reaches 90% of GDP was not supported by the corrected data. One of the more striking results from the Reinhart and Rogoff paper, one that had been widely and publicly used in economic policy debates, had largely been debunked due to spreadsheet and data issues. I am sure Reinhart and Rogoff wish they had spent more time reviewing their work and had subjected it to further peer review.

Group Insurance

So how does all of this apply to those of us who work in group insurance? The group insurance world is full of imperfect data: data from different sources, data in different formats, and incomplete data. Much of our work requires interpreting and manipulating that data, so it is particularly prone to errors, and those errors can have material financial implications. Given this, it is worth periodically reminding ourselves of the importance of checking our finished work product to make sure the answer is reasonable, and of having that work peer-reviewed, especially when there could be real financial consequences.

Reasonableness Checks

Checking our work for reasonableness can take different forms depending on the project. The important thing is to step back and ask ourselves, with an open and critical mind, if our answer makes sense (and not just accept what our model spits out). It may simply involve a checklist of questions we ask ourselves when we finish working on a case. Some of the questions we come up with may even seem somewhat mundane or obvious, but the exercise of going through them may reveal an error we might otherwise have missed.

For example, if we are pricing a group life case, we might ask ourselves:

  • Is the rate I have calculated about what I would have expected for a case of this size and industry? If not, do I have any idea why?
  • If the experience is credible and the experience rate is dramatically different from the manual rate, do I have a thesis as to why, or could there be a data error in the experience rating or manual rating?
  • After checking everything, does the rate still simply seem too low to pass the smell test? For example, if a case has a fully credible experience life rate that is lower than most cases’ AD&D rates, could the group’s experience realistically be expected to continue at that level? If I were playing with my own money, would I make that bet?
  • If the experience claim rate is trending up, do we have an explanation as to why? Was there an open enrollment? Is it aging faster than might normally be expected? Should we weight experience more to most recent years?
  • Is the claim rate trending down? If so, is the experience legitimately getting better, or is the experience simply not as complete as we are assuming in our experience rating?
  • If the group has retiree coverage and all retirees on the census are much older than 65, is it potentially a closed block of retirees that requires aging, even if it has not been communicated as such?
  • If it is a renewal, does the experience provided with the request for proposal match the experience on our internal company systems? If not, why not?
  • Are the lives and volumes reported on the census consistent with the numbers on the experience report and recent billing statements?

This is just a sampling of the things we might ask ourselves, and, as I mentioned, some of these questions may seem mundane or obvious, but the exercise of going through a mental checklist such as this can help us avoid having one of those Reinhart and Rogoff moments.
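Some of these comparisons also lend themselves to a light automated pass before any human review. The sketch below is purely illustrative: the field names, thresholds, and sample figures are hypothetical, not any company’s actual standards.

```python
# Illustrative reasonableness checks for a group life quote.
# All field names, thresholds, and tolerances here are hypothetical,
# a sketch of the kind of checklist described above, not a standard.

def reasonableness_flags(case):
    """Return a list of warnings worth a second look before release."""
    flags = []

    # A credible experience rate far from the manual rate may signal
    # a data error in one of the two ratings.
    if case["credibility"] >= 0.8:
        ratio = case["experience_rate"] / case["manual_rate"]
        if not 0.5 <= ratio <= 2.0:
            flags.append(f"Experience/manual rate ratio {ratio:.2f} is extreme")

    # A life rate below a typical AD&D rate rarely passes the smell test.
    if case["final_life_rate"] < case["typical_add_rate"]:
        flags.append("Life rate below a typical AD&D rate")

    # Census lives should reconcile with experience reports and billing.
    census, billed = case["census_lives"], case["billed_lives"]
    if abs(census - billed) / billed > 0.05:
        flags.append(f"Census lives {census} vs billed lives {billed} differ >5%")

    return flags


quote = {
    "credibility": 0.9,
    "experience_rate": 0.08,   # per $1,000 of volume, per month
    "manual_rate": 0.22,
    "final_life_rate": 0.10,
    "typical_add_rate": 0.04,
    "census_lives": 980,
    "billed_lives": 1045,
}

for warning in reasonableness_flags(quote):
    print("CHECK:", warning)
```

A pass like this does not replace the open, critical mind the checklist calls for; it simply guarantees the mundane comparisons are never skipped in the rush to finish a case.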

Peer Review

Once we have reviewed our own work, having it peer-reviewed by someone else may help catch mistakes or anomalies we missed, because there is a natural tendency to overlook our own errors. In an article on Wired.com, author Nick Stockton, with help from psychologist Tom Stafford, explored why we do this. The article focuses on typos, but the same concepts can easily be applied to the mathematical work we do.

In the article, Stockton notes that our brains are designed to look past details and focus on more complex issues. If we are reviewing a pricing exercise we compiled ourselves, our minds may treat certain aspects as details and instinctively look right past them. A peer reviewer may not consider them details and is more apt to check through them. Stockton summarizes it well by saying, “By the time you proofread your own work, your brain already knows the destination.” For our peer reviewers, on the other hand, “their brains are on this journey the first time, so they are paying more attention to the details along the way and not anticipating the final destination.”

Asking someone to review our work can make us feel slightly vulnerable as we open ourselves up to criticism. However, having our peer reviewer catch a mistake is much less painful than missing the mistake altogether and having it result in a financial loss or having our error exposed to a much wider audience.

Conclusion

The next time we have completed a project or priced a large case and we are tempted to just go with our answer without giving it the appropriate review, we should remind ourselves of two things: 1) there is no partial credit in real life, and 2) mistakes can happen to anyone. Just ask Reinhart and Rogoff.

Pat Hurley
Vice President and Lead Actuary, Life, Accident and Special Risk (LASR), U.S. Group Reinsurance