A Reality Check for Predictive Models

My actuarial career began with the naive belief that anything could be mathematically modeled, and that model-based prediction was limited only by the sophistication of our models and the degree of our intellect. By now I’ve built mathematical models for insurance companies, investment firms, and hedge funds for over a decade, all of which involved predictions and substantial amounts of money. That experience has given me a healthy dose of skepticism about what can and cannot be modeled and successfully predicted. Furthermore, if history is a guide, predictive models built by actuaries will inevitably result in unexpected financial losses. Thinking back on my earlier experiences with predictive models, I get visions of the “ivory tower” we actuaries stood in without realizing it, myself included. What concerns me now is that many actuaries remain unaware of the ivory tower in which they currently stand. For me, this realization didn’t come from new mathematical techniques, consultants, technology, or building more truth-seeking machines. I figured it out only after building predictive models whose results had consequences on my personal balance sheet. Knowing the dollar amount I could personally lose betting on the accuracy of my predictive models forced me to dig deep and discover insights I hadn’t recognized before. This essay details what I’ve learned while using predictive models to make decisions under opacity. More specifically, it provides an overview of the key risks actuaries face as they apply the explosion of available data, computing power, and breakthroughs in applied mathematics to building complex predictive models for an ever-expanding universe of potential applications.


Risk I: Predictive Modeling is Not Mathematics


Mathematics is concerned with logical deductions of facts. Said another way, mathematics proves theorems (tidy if-then, true-false statements). Predictive models may discover certain properties, but they will never have theorems. Mathematics deals in abstractions of things in the real world, whereas predictive models deal with the real world itself. Like many academics who have been intoxicated by their models, actuaries are prone to conflating their models with the real-world events in question. Consider the quote on our FSA credentials: “The work of science is to substitute facts for appearances and demonstrations for impressions.” Mathematics certainly does this by proving facts in a hypothetical world. Unfortunately, predictive modeling will never deliver material facts about the real world in which humans behave. Finding truths in complex systems involving humans requires intuition, experiment design, and getting your hands dirty.

The unintended consequence of confusing predictive modeling with mathematics is overconfidence in models, or a failure to find more robust alternative solutions involving things beyond mathematics. In either case, financial losses will result when actuaries blur the distinction. Recall the four-letter word LTCM (Long-Term Capital Management) and the mathematically elegant Black-Scholes formula, still taught on actuarial exams today. At the time, it was widely believed that this model had cracked the code of financial markets. It took a $3.6 billion bailout supervised by the Federal Reserve for many, including the Nobel laureates who developed the model, to realize that complex systems involving human behavior cannot be explained by mathematics alone.
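
The formula itself is worth staring at, because every symbol in it rests on an assumption. In its usual textbook form, the Black-Scholes price of a European call is

\[
C = S_0\,\Phi(d_1) - K e^{-rT}\,\Phi(d_2), \qquad
d_1 = \frac{\ln(S_0/K) + \left(r + \tfrac{1}{2}\sigma^2\right)T}{\sigma\sqrt{T}}, \qquad
d_2 = d_1 - \sigma\sqrt{T},
\]

where \(S_0\) is the current price, \(K\) the strike, \(r\) the risk-free rate, \(T\) the time to expiry, \(\sigma\) the volatility, and \(\Phi\) the standard normal distribution function. The tidy result depends on constant volatility, lognormally distributed prices, and continuous, frictionless hedging; the hypothetical world in which the theorem holds is not the world in which LTCM’s trades unwound in 1998.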


Risk II: Predictive Models Don’t Predict Tails


When I was fresh out of UCLA, I worked for a Seattle-based property-casualty insurer. One morning, staring at a spreadsheet, I read “Church Reserves” typed in bold next to “Asbestos Reserves.” The church reserves existed for the settlement of sexual abuse claims resulting from the misconduct of priests. Since I had attended Catholic school for much of my childhood, I couldn’t help but think about the impossibility of a mathematical model predicting this. It simply wouldn’t have been in the data at the time. Ironically, the interconnectedness and availability of information increase the severity of these hard-to-predict risks, and predictive models do nothing to address them. The massive imprecision in measuring tails and second-order effects inherent in complex dynamic systems is well documented. Predictive models will fit past data with precision on paper, but, by their nature, tails are impossible to calibrate in practice. What’s worse is that failing to account for tails can cancel all the benefits of predictive modeling for more frequent events in the first place.

The critical factor behind the largest single-day stock market crash, on October 19, 1987 (Black Monday), can be traced back to quantitative portfolio insurance models failing to account for low-frequency second-order effects. Just as with the church claims, no amount of data or methodology would have helped forecast these risks. The predictive models used for both portfolio insurance and liability insurance didn’t account for contagion: the viciously reinforcing cycles that feed on themselves. A lawsuit filed by the first claimant fuels the next, that claim fuels many more, lawsuits won mean more lawsuits filed, and the cycle continues. Similarly, portfolio insurance algorithms sell futures when markets drop, the selling pushes markets down further, further declines require more selling, and the cycle continues. Although tail risks like these have only increased with the interconnectedness of everything in the information age, predictive models inherently discount them. Tails and second-order effects cannot be predicted by predictive models.
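
To make the mechanism concrete, here is a minimal sketch of a selling spiral in Python. It is a toy model with purely illustrative parameters (the 5% initial shock and the 0.6 feedback factor are assumptions for the example), not a reconstruction of any firm’s 1987 portfolio insurance program.

```python
# A toy sketch of a self-reinforcing selling spiral, in the spirit of the 1987
# portfolio insurance dynamic described above. All parameters are illustrative
# assumptions, not calibrated to any real market.

def selling_spiral(initial_drop: float = 0.05, feedback: float = 0.6, rounds: int = 20) -> float:
    """Return the cumulative price decline after a shock feeds on itself.

    Each round, forced selling proportional to the latest drop pushes the
    price down again; `feedback` controls how much of each drop is passed
    through to the next round of selling.
    """
    price = 1.0
    drop = initial_drop
    for _ in range(rounds):
        price *= 1.0 - drop   # this round's forced selling hits the price
        drop *= feedback      # next round's selling scales with the last drop
        if drop < 1e-9:       # the spiral has effectively burned out
            break
    return 1.0 - price

if __name__ == "__main__":
    print(f"Initial shock: 5.0% | Total decline after the spiral: {selling_spiral():.1%}")
```

The point of the toy is not the number it prints; it is that the total decline is governed by the feedback term, a parameter that never appears in a model fit only to the history of “normal” days.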


Risk III: The Dangers of Naive Extrapolations


Many years after receiving my FSA, I worked for a life insurer in California as an investment actuary. At the time, “predictive modeling” wasn’t the popular vernacular it had been years earlier in my property-casualty days. Then, one afternoon, spread across the conference table were glossy pitch decks filled with impressive bios, beautiful designs, and prestigious corporate logos. Consulting actuaries were visiting, excited to discuss the “new technology” that would revolutionize the accuracy of how policyholder decision-making was handled in our models. These were extremely important assumptions needed to calculate the cost of our liabilities; if wrong, they could cost billions because of the stock market guarantees we had sold based on them. The meeting took me back to my work with predictive models in property-casualty insurance. It was the terminology: “maximum likelihood, link functions, generalized linear...” I immediately knew it was nothing novel, since I’d done this modeling a decade prior. More intriguing, though, was the consultants’ extrapolation that predictive models originally built for auto insurance pricing would work for predicting the decisions humans would make well into the future, complex decisions policyholders would make under severe information asymmetries. Worse, I knew that the more we relied on the accuracy of the predictive model’s forecasts, the larger the losses would be when those forecasts turned out wrong: our pricing would get more aggressive if we thought we had a better handle on the distribution of policyholder decisions. What I believe the consultants didn’t understand is that the knowledge needed for improved prediction is often highly localized, context-specific, and idiosyncratic. More important than the science the consultants loved is an understanding of the constantly changing details. Said another way, more important than the models themselves is knowing where and when they will pay off when the “correct” answer keeps changing. That knowledge cannot be found in more data and better models, because top-down models can never cope with the ubiquity of ever-changing dynamics.
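
For readers unfamiliar with the terminology in those pitch decks, the sketch below shows what “maximum likelihood, link functions, generalized linear” amounts to in practice: a logistic GLM fit to simulated lapse decisions. The drivers (moneyness and policy duration), the coefficients, and the data are all hypothetical, invented only to illustrate the machinery, not any insurer’s actual assumptions.

```python
# A minimal sketch of the GLM machinery behind "maximum likelihood, link
# functions, generalized linear." Data, drivers, and coefficients are all
# hypothetical; this illustrates the technique, not any insurer's assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "moneyness": rng.uniform(0.5, 1.5, n),  # assumed driver: guarantee value relative to account value
    "duration": rng.integers(1, 21, n),     # assumed driver: policy year
})

# Simulate lapse decisions from an assumed "true" relationship. In practice
# that relationship is exactly what keeps changing underneath the model.
linear_predictor = -1.0 + 2.0 * (1.0 - df["moneyness"]) - 0.05 * df["duration"]
df["lapsed"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linear_predictor)))

# Fit a binomial GLM (logit link by default) by maximum likelihood.
X = sm.add_constant(df[["moneyness", "duration"]])
fit = sm.GLM(df["lapsed"], X, family=sm.families.Binomial()).fit()
print(fit.summary())
```

Nothing in the fit’s tidy standard errors measures the risk that policyholders, a decade from now and deep in the money, behave unlike anyone in the training data.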

Actuaries who believe in the “extrapolations of experts” pose a substantial risk. It is not possible to know in advance which models will generate the best results, and actuaries relying on naive extrapolations to build predictive models will eventually cause losses. Recall David Li, the modern-day “famous actuary” cited on Wikipedia. Li hit upon a predictive model that helped fuel the financial crisis of 2007–2008. He had extrapolated that the joint-life distributions used to model dying spouses were well suited to modeling credit default swaps. It was Li’s predictive model that supplied a method for bundling many different swaps into a CDO and outputting the tranche-level correlations. With just a bit more mathematical alchemy, securities priced using his predictive model on subprime CDOs were rolling off the assembly line with AAA ratings. Li’s model is not just an example of a failed model; it went much further, fueling the growth of the market itself and then its crash in spectacular fashion. Stretching predictive models to ever-expanding applications looks very similar to Li’s naive application of the Gaussian copula formula to mortgages. Note that Gauss originally devised the underlying mathematics to track the motion of celestial bodies.
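
For reference, the copula at the heart of Li’s model can be stated in one line. In its usual form, the Gaussian copula couples marginal distributions \(u_1, \dots, u_n\) through the multivariate normal:

\[
C_\Sigma(u_1, \dots, u_n) = \Phi_\Sigma\!\left(\Phi^{-1}(u_1), \dots, \Phi^{-1}(u_n)\right),
\]

where \(\Phi^{-1}\) is the standard normal quantile function and \(\Phi_\Sigma\) is the multivariate normal distribution function with correlation matrix \(\Sigma\). Applied to a CDO, each \(u_i\) is the marginal probability that credit \(i\) defaults by a given horizon, and the entire dependence between defaults, the very thing the crisis turned on, is compressed into the correlation parameters in \(\Sigma\).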


Managing Predictive Model Risk


The financial disasters that predictive models will inevitably create cannot be managed by traditional means. Mistaking predictive modeling for mathematics, discounting tail events, and relying on naive extrapolations are the key risks actuaries face in embracing predictive modeling in practice. For each risk, this essay points out how glorified models with serious defects contributed to massive destruction. In every case mentioned, predictive models aided and abetted putting our financial security systems at risk. Despite this, predictive modeling marches forward in an over-hyped fashion, unperturbed by real-life lessons of the past. I believe profound change is needed, and the actuarial profession should be laser-focused on delivering solutions that address these risks. Acting otherwise is tantamount to declaring ourselves impervious to empirical real-world developments. As much as actuaries focus on the usefulness of quantitative models, we need to get a better handle on their dangers. Presenting predictive modeling as a scientific breakthrough should stop. Merely footnoting embedded unrealistic assumptions that have led to past financial disasters should also stop; unrealistic assumptions and their historical track records should be explored explicitly. In addition, more attention should be paid to how things could be done in the absence of mathematical models, so that the risks and rewards of implementing quantitative models can be weighed in the first place. Model governance, enhanced systems, and additional exams on techniques will not remedy the risks I’ve outlined in this essay. Yet an approach powerful enough to address these risks would do more to advance the actuarial profession than predictive modeling ever could. It is here, when actuaries step down from their ivory towers and do more than apply quantitative risk management techniques, that real innovation will occur.
