Will COVID-19 cases and deaths surge again this winter? The just-released combined results of nine models applied to four different scenarios at the COVID-19 Modeling Hub project that diagnosed cases could, under the more hopeful models' projections, drop to around 9,000 per day by March. The scenarios range from the most hopeful, with childhood COVID-19 vaccinations and no new viral variant, to one with no child vaccinations and a new variant.
University of North Carolina epidemiologist Justin Lessler, who helps run the hub, tells NPR that the most likely scenario is that children do get vaccinated and no super-spreading variant emerges.
The good news is that about 55 percent of all Americans (181 million) are now fully vaccinated (64 percent of those age 12 and up). Given that the number of unreported COVID-19 cases is generally thought to considerably exceed the 42 million diagnosed cases, perhaps around 100 million Americans have developed natural immunity to the virus.
The past year and a half has brought significant disruption to how organisations conduct business and the way we work. Throughout these unprecedented times, however, financial modellers and developers have had to continue to build, refine and test their models. Figure 1 below summarises the areas we have covered in the blog series and how they fit together to form practical guidance on how to follow and implement the Financial Modelling Code.
Not all 10% increases are created equal. By that we mean that assumption effects are often more impactful in one direction than in the other, especially for truncation models or those that use a CTE (conditional tail expectation) measure.
Principles-based reserves, for example, use a CTE70 measure. [Take the average of the worst (100% – 70% =) 30% of the scenarios.] If your model increases expenses 3% across the board, sure, on average, your asset funding need might increase by exactly that amount. However, because your final measurement isn’t the average across all the scenarios, but only the worst ones, it’s likely that your reserve amounts will increase by significantly more than the average. You might need to run a few different tests, at various magnitudes of change, to determine how your various outputs change as a function of the volatility of your inputs.
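A quick sketch of this kind of sensitivity test, using purely illustrative numbers and a toy scenario model (not an actual PBR calculation): funding need per scenario is floored at zero, so a uniform ±3% expense shock does not move the mean and the CTE70 by the same amount, and the up- and down-shocks have different magnitudes of effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical scenario set (illustrative assumptions only): projected
# income varies widely across stochastic scenarios, projected expenses
# are fixed at 90 per scenario.
income = rng.normal(100.0, 15.0, n)
expenses = np.full(n, 90.0)

def cte(x, level=0.70):
    """CTE at `level`: average of the worst (1 - level) share of scenarios."""
    k = max(1, round((1 - level) * len(x)))
    return np.sort(x)[-k:].mean()

def deficiency(expense_mult):
    # Funding need per scenario, floored at zero: surpluses in good
    # scenarios cannot offset deficits in bad ones. This truncation is
    # what makes shock effects nonlinear and direction-dependent.
    return np.maximum(expenses * expense_mult - income, 0.0)

base = deficiency(1.00)
for mult in (0.97, 1.03):
    shocked = deficiency(mult)
    print(f"{mult - 1:+.0%} expenses -> "
          f"mean {shocked.mean() / base.mean() - 1:+.1%}, "
          f"CTE70 {cte(shocked) / cte(base) - 1:+.1%}")
```

Running the shocks at a few magnitudes (±1%, ±3%, ±10%) and comparing the mean against the CTE70 is one way to map out how outputs respond to input volatility, per the advice above.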
If nothing else, having a checklist to go through while working on modeling can help you make sure you don’t miss anything. Hey, ASB, make some handy-dandy sticky note checklists we can stick on our monitors to ask us:
3.1 Does our model meet the intended purpose?
3.2 Do we understand the model, especially any weaknesses and limitations?
3.3 Are we relying on data or other information supplied by others?
3.4 Are we relying on models developed by others?
3.5 Are we relying on experts in the development of the model?
3.6 Have we evaluated and mitigated model risk?
3.7 Have we appropriately documented the model?
Author(s): Mary Pat Campbell
Publication Date: April 2021
Publication Site: The Modeling Platform at the Society of Actuaries
In his youth, the economist Kenneth Arrow analysed weather forecasts for the US Army. When he found that the predictions were as reliable as historical averages, he suggested reallocating manpower. The response from the army general’s office? “The general is well aware that your division’s forecasts are worthless. However, they are required for planning purposes.”
Even before COVID-19, many shared that scepticism of forecasts. The failure to foresee the 2008-09 financial crisis started a debate on economic modelling. Over the past year, the performance of epidemiological models has not resolved this quandary.
Investors have long known that “all models are wrong, but some are useful,” to use the statistician George Box’s pithy idiom. But some modellers use this defence to preserve models beyond their usefulness. Meanwhile, consumers of models, including investors, policymakers and society at large, hold unrealistic expectations: they assume that complex issues are easy to forecast, when some things are simply unknowable. This gap raises the question of what investors should do.
Forecasters are predicting that U.S. COVID-19 case counts and death numbers will continue to improve over the next four weeks.
Most of the forecasters in the COVID-19 Forecast Hub system say weekly new case counts will be somewhere between 350,000 and 450,000 over the next four weeks, compared with an actual number of about 477,000 recorded during the week that ended March 1.
The forecasters are predicting the number of deaths per week will fall to about 6,000 to 8,000, from about 14,000 per week, over that same four-week period.