The report analyses the development of mortality assumptions used to build mortality tables that better protect retirement income provision. It first provides an international overview of longevity trends and drivers over the last several decades, including the impact of the COVID-19 pandemic. It then explores considerations and traditional approaches for developing mortality tables, and details the standard mortality tables developed across OECD member countries. It concludes with guidelines to assist regulators and supervisors in assessing whether the mortality assumptions and tables used in the context of retirement income provision are appropriate.
The OECD will provide an overview of the publication, followed by a roundtable discussion with government and industry stakeholders. Topics discussed will include:
Recent mortality trends and drivers
How mortality trends/drivers can inform future expectations, and how to account for that in modelling
The challenge of accounting for COVID-19 when setting mortality assumptions
Trade-offs for different modelling approaches
The usefulness of the guidelines included in the report in practice
How to better communicate around mortality assumptions to non-experts
In the PBRAR, VM-31 3.D.2.e.(iv) requires the actuary to discuss “which risks, if any, are not included in the model” and 3.D.2.e.(v) requires a discussion of “any limitations of the model that could materially impact the NPR [net premium reserve], DR [deterministic reserve] or SR [stochastic reserve].” ASOP No. 56 Section 3.2 states that, when expressing an opinion on or communicating results of the model, the actuary should understand: (a) important aspects of the model being used, including its basic operations, dependencies, and sensitivities; (b) known weaknesses in assumptions used as input and known weaknesses in methods or other known limitations of the model that have material implications; and (c) limitations of data or information, time constraints, or other practical considerations that could materially impact the model’s ability to meet its intended purpose.
Taken together, VM-31 and ASOP No. 56 require the actuary (i.e., any actuary working with or responsible for the model and its output) not only to know and understand these limitations but to communicate them to stakeholders. An example of this may be reinsurance modeling. A common technique in modeling the many treaties of yearly renewable term (YRT) reinsurance of a given cohort of policies is to use a simplification, where YRT premium rates are blended according to a weighted average of net amounts at risk. That is to say, the treaties are not modeled seriatim but as an aggregate or blended treaty applicable to amounts in excess of retention. This approach assumes each third-party reinsurer is as solvent as the next. The actuary must ask, “Is there a risk that is ignored by the model because of the approach to modeling YRT reinsurance?” and “Does this simplification present a limitation that could materially impact the net premium reserve, deterministic reserve or stochastic reserve?”
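The blended-treaty simplification described above can be sketched in a few lines. This is an illustrative sketch only: the treaty data, rate values, and the `blended_yrt_rate` helper are hypothetical, not taken from any actual filing or model.

```python
# Hypothetical sketch: collapsing several YRT treaties into one blended rate,
# weighted by net amount at risk (NAR) ceded to each reinsurer.

def blended_yrt_rate(treaties):
    """Weighted-average YRT premium rate, with weights = NAR ceded per treaty."""
    total_nar = sum(t["nar"] for t in treaties)
    if total_nar == 0:
        return 0.0
    return sum(t["rate"] * t["nar"] for t in treaties) / total_nar

# Two illustrative treaties; in practice there may be many, across many reinsurers.
treaties = [
    {"reinsurer": "Re A", "nar": 40_000_000, "rate": 0.0012},
    {"reinsurer": "Re B", "nar": 60_000_000, "rate": 0.0018},
]

# The single blended rate is then applied to all ceded NAR in aggregate,
# which is exactly where reinsurer-specific risk (e.g., solvency) gets averaged away.
rate = blended_yrt_rate(treaties)
```

Note that once the treaties are collapsed this way, no scenario can distinguish one reinsurer's default from another's, which is the limitation the questions above are probing.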
Understanding limitations of a model requires understanding the end-to-end process that moves from data and assumptions to results and analysis. The extract-transform-load (ETL) process actually fits well with the ASOP No. 56 definition of a model, which is: “A model consists of three components: an information input component, which delivers data and assumptions to the model; a processing component, which transforms input into output; and a results component, which translates the output into useful business information.” Many actuaries work with models on a daily basis, yet it helps to revisit this important definition. Many would not recognize the routine step of accessing the policy level data necessary to create an in-force file as part of the model itself. The actuary should ask, “Are there risks introduced by the frontend or backend processing in the ETL routine?” and “What mitigations has the company established over time to address these risks?”
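The three-component definition quoted above maps naturally onto a pipeline. The sketch below is a toy illustration of that mapping, assuming made-up record fields and a flat mortality rate; none of the names come from the source.

```python
# Toy illustration of the ASOP No. 56 three-component model definition.
# extract  -> information input component (delivers data/assumptions)
# transform -> processing component (turns input into output)
# load     -> results component (translates output into business information)

def extract(raw_records):
    """Input component: build the in-force file from raw policy-level data."""
    return [r for r in raw_records if r.get("status") == "inforce"]

def transform(inforce, mortality_rate):
    """Processing component: apply an assumption to produce model output."""
    return [{"id": r["id"], "expected_claims": r["face"] * mortality_rate}
            for r in inforce]

def load(results):
    """Results component: summarize output for decision-makers."""
    return {"total_expected_claims": sum(r["expected_claims"] for r in results)}

raw = [{"id": 1, "face": 100_000, "status": "inforce"},
       {"id": 2, "face": 250_000, "status": "lapsed"}]
summary = load(transform(extract(raw), 0.002))
```

The point of the sketch is the one made in the text: the filtering inside `extract` is already part of the model, and an error there (say, a wrong status code) propagates silently into every downstream result.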
Move over, League of Legends. Does anyone even care about Overwatch? No, the real future of esports is spreadsheets and Microsoft Excel. Don’t believe us? Then tune in to ESPN3 or YouTube this weekend to find out.
No, this isn’t a joke. The Financial Modeling World Cup will be held this weekend entirely in Microsoft Excel. And the finals (the quarterfinals, semifinals, and the final match) will all be broadcast live as they happen at 9 AM PT. Everyone’s playing for a total prize of $10,000 — funded by Microsoft, of course.
NPI: Non-Pharmaceutical Interventions (e.g., masks, social distancing)
Epiweek: Epidemiological Week as defined by MMWR
LOP: Linear Opinion Pool; method used to calculate Ensemble_LOP and Ensemble_LOP_untrimmed by averaging cumulative probabilities of a given value across submissions. See Notes for more details.
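The LOP method described in the glossary can be sketched directly: for a given value, average the cumulative probability each submission assigns to it. The model names and probability values below are invented for illustration; this is the untrimmed pool only.

```python
# Sketch of a Linear Opinion Pool (LOP): the ensemble's cumulative probability
# at a value is the plain average of the member submissions' cumulative
# probabilities at that same value.

def lop_cdf(submission_cdfs, value):
    """Ensemble P(X <= value) = mean of member CDFs evaluated at value."""
    probs = [cdf[value] for cdf in submission_cdfs]
    return sum(probs) / len(probs)

# Each hypothetical model reports P(weekly deaths <= value) at shared values.
model_a = {5000: 0.30, 10000: 0.80}
model_b = {5000: 0.50, 10000: 0.90}

ensemble_p = lop_cdf([model_a, model_b], 10000)
```

The trimmed variant (Ensemble_LOP) would drop the most extreme submissions before averaging; the sketch above corresponds to Ensemble_LOP_untrimmed.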
Will COVID-19 cases and deaths surge again this winter? The just-released combined results of nine models applied to four different scenarios at the COVID-19 Modeling Hub project that diagnosed cases could drop to around 9,000 per day by March under the projections of the more hopeful models. The scenarios range from the most hopeful, with childhood COVID-19 vaccinations and no new viral variant, to one with no child vaccinations and a new variant.
University of North Carolina epidemiologist Justin Lessler, who helps run the hub, tells NPR that the most likely scenario is that children do get vaccinated and no super-spreading variant emerges.
The good news is that about 55 percent of all Americans (181 million) are now fully vaccinated (64 percent of those age 12 and up). Given that unreported COVID-19 cases are generally thought to be considerably higher than the 42 million diagnosed cases, that suggests perhaps around 100 million Americans have developed natural immunity to the virus.
There has been significant disruption in how organisations conduct business and the way we work over the past year and a half. However, financial modellers and developers have had to continue to build, refine and test their models throughout these unprecedented times. Figure 1 below summarises the areas we have covered in the blog series and how they fit together to form practical guidance on how to follow and implement the Financial Modelling Code.
Not all 10% increases are created equal. And by that we mean, assumption effects are often more impactful in one direction than in the other. Especially when it comes to truncation models or those that use a CTE measure (conditional tail expectation).
Principles-based reserves, for example, use a CTE70 measure. [Take the average of the worst (100% – 70% =) 30% of the scenarios.] If your model increases expenses by 3% across the board, sure, on average, your asset funding need might increase by exactly that amount. However, because your final measurement isn’t the average across all the scenarios, but only the worst ones, it’s likely that your reserve amounts are going to increase by significantly more than the average. You might need to run a few different tests, at various magnitudes of change, to determine how your various outputs change as a function of the volatility of your inputs.
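The CTE70 idea above can be shown with a short sketch. The scenario values and the `cte` helper are illustrative assumptions, not an actual reserve calculation.

```python
# Sketch of a CTE70 calculation: average the worst 30% of scenario results,
# where "worst" for a reserve means the largest asset funding need.

def cte(results, level=0.70):
    """Conditional tail expectation: mean of the worst (1 - level) of scenarios."""
    ordered = sorted(results, reverse=True)           # largest funding need first
    k = max(1, int(round(len(ordered) * (1 - level))))
    tail = ordered[:k]
    return sum(tail) / len(tail)

# Ten hypothetical scenario funding needs.
base = [100, 105, 110, 120, 150, 160, 180, 200, 250, 300]

# A flat 3% shock moves CTE70 by exactly 3% only because every scenario moves
# in lockstep. A shock that widens the spread of scenarios can move the tail
# average by much more than the all-scenario average, which is the article's point.
shocked = [x * 1.03 for x in base]
```

This is why testing at several shock magnitudes matters: with a tail measure, the response to an input change is generally not proportional.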
If nothing else, having a checklist to go through while working on modeling can help you make sure you don’t miss anything. Hey, ASB, make some handy-dandy sticky note checklists we can stick on our monitors to ask us:
3.1 Does our model meet the intended purpose?
3.2 Do we understand the model, especially any weaknesses and limitations?
3.3 Are we relying on data or other information supplied by others?
3.4 Are we relying on models developed by others?
3.5 Are we relying on experts in the development of the model?
3.6 Have we evaluated and mitigated model risk?
3.7 Have we appropriately documented the model?
Author(s): Mary Pat Campbell
Publication Date: April 2021
Publication Site: The Modeling Platform at the Society of Actuaries
In his youth, the economist Kenneth Arrow analysed weather forecasts for the US Army. When he found that the predictions were as reliable as historical averages, he suggested reallocating manpower. The response from the army general’s office? “The general is well aware that your division’s forecasts are worthless. However, they are required for planning purposes.”
Even before COVID-19, many shared that scepticism of forecasts. The failure to foresee the 2008-09 financial crisis started a debate on economic modelling. Over the past year, the performance of epidemiological models has not resolved this quandary.
Investors have long known that “all models are wrong, but some are useful,” to use the statistician George Box’s pithy aphorism. But there are modellers who use this defence to preserve models beyond their usefulness. Meanwhile, there are unrealistic expectations from consumers of models, including investors, policymakers and society. They assume that complex issues are easy to forecast, when some things are just unknowable. This gap raises the question of what investors should do.
Forecasters are predicting that U.S. COVID-19 case counts and the U.S. COVID-19 death numbers will continue to improve over the next four weeks.
Most of the forecasters in the COVID-19 Forecast Hub system say weekly new case counts will be somewhere between 350,000 and 450,000 over the next four weeks, compared with an actual number of about 477,000 recorded during the week that ended March 1.
The forecasters are predicting the number of deaths per week will fall to about 6,000 to 8,000, from about 14,000 per week, over that same four-week period.