Actuaries quantify risk. One of their riskiest endeavors is trying to become one.
Among people taking at least one exam from the Society of Actuaries—the field’s biggest U.S. credentialing body—15% eventually pass the multiple tests required to become an Associate, one of two designations allowing them to practice. Just 10% pass those and additional tests to become a Fellow, the group’s higher designation, which affords bigger responsibilities and salaries.
It’s such an arduous process that the number of test-takers has been declining in recent years, and the society is making changes to keep candidates from dropping out of the gantlet. It is also adding new “predictive analytics” tests to adjust to the massive amounts of data insurers now have.
There is no limit to how many times a candidate can take the tests. It took one man 50 years to become a Fellow, says Stuart Klugman, an official at the society. The society says candidates typically take seven to 10 years to become Fellows; they must pass 10 exams and complete other coursework and requirements.
3. Identify pockets of good and poor model performance. Even if you can’t fix them, you can use this information in future underwriting (UW) decisions. I really like one- and two-dimensional views (e.g., age × pension amount) and performance across the 50 or 100 largest plans—this is the level of precision at which plans are actually quoted. (See Figure 3.) A sketch of this kind of segment-level review follows after this discussion.
What size of unexplained A/E residual is satisfactory at the pricing-segment level? How often will it occur in your future pricing universe? For example, a 1-2% residual is probably acceptable, while a 10-20% residual in a popular segment likely indicates a model specification issue to explore.
Positive residuals mean that actual mortality is higher than the model predicts (A&gt;E). If the model is used to price such a case, the mortality assumption will be lower than the data suggest, so the longevity price will be higher than if you had simply followed the data, with a possible risk of not being competitive. Negative residuals mean A&lt;E: predicted mortality is too high relative to historical data, with a possible risk of the price being too low.
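As a minimal sketch of this kind of segment-level A/E review, the snippet below aggregates a mortality experience dataset over two dimensions and flags cells whose residual exceeds a tolerance. The column names, bands, and the 10% threshold are illustrative assumptions, not from the original text.

```python
import pandas as pd

# Tiny synthetic experience dataset; the column names and bands are
# illustrative assumptions. A real study would have many more cells.
experience_df = pd.DataFrame({
    "age_band":        ["60-64", "60-64", "65-69", "65-69"],
    "pension_band":    ["low", "high", "low", "high"],
    "actual_deaths":   [52, 38, 95, 61],
    "expected_deaths": [50, 45, 93, 52],
})

def ae_by_segment(df, dims=("age_band", "pension_band")):
    """Sum actual and expected deaths per segment and compute A/E."""
    g = df.groupby(list(dims))[["actual_deaths", "expected_deaths"]].sum()
    g["ae_ratio"] = g["actual_deaths"] / g["expected_deaths"]
    # Residual > 0 means A > E (model under-predicts mortality);
    # residual < 0 means A < E (model over-predicts mortality).
    g["residual"] = g["ae_ratio"] - 1.0
    return g.reset_index()

segments = ae_by_segment(experience_df)

# Flag segments whose unexplained residual exceeds the tolerance;
# a 1-2% residual is usually noise, while 10-20% is worth investigating.
flagged = segments[segments["residual"].abs() > 0.10]
print(flagged.sort_values("residual", key=abs, ascending=False))
```

In practice you would run the same view over each candidate pricing dimension, and separately over the largest plans, to see where the model can and cannot be trusted.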
In machine learning, there has long been a trade-off between model complexity and model performance. Complex models such as deep learning, which often outperform interpretable models such as linear regression, have been treated as black boxes. The research paper by Ribeiro et al. (2016), titled “Why Should I Trust You?”, aptly encapsulates the issue with ML black boxes. Model interpretability is a growing field of research. This blog discusses the ideas behind LIME and SHAP.
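As a minimal sketch of how a SHAP explanation is produced in practice, the snippet below fits a gradient-boosted model and computes Shapley values for its predictions. The dataset, model choice, and plot are illustrative assumptions, not prescribed by the paper or this blog; the shap and xgboost packages are assumed to be installed.

```python
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Illustrative setup: a tree-based model on a public regression dataset.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100, max_depth=4).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which features drive the model's predictions overall.
shap.summary_plot(shap_values, X)
```

Each Shapley value is one feature’s additive contribution to a single prediction relative to the model’s average output, which is what lets you peer inside an otherwise black-box model.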