In the PBRAR, VM-31 3.D.2.e.(iv) requires the actuary to discuss “which risks, if any, are not included in the model” and 3.D.2.e.(v) requires a discussion of “any limitations of the model that could materially impact the NPR [net premium reserve], DR [deterministic reserve] or SR [stochastic reserve].” ASOP No. 56 Section 3.2 states that, when expressing an opinion on or communicating results of the model, the actuary should understand: (a) important aspects of the model being used, including its basic operations, dependencies, and sensitivities; (b) known weaknesses in assumptions used as input and known weaknesses in methods or other known limitations of the model that have material implications; and (c) limitations of data or information, time constraints, or other practical considerations that could materially impact the model’s ability to meet its intended purpose.
Taken together, VM-31 and ASOP No. 56 require the actuary (i.e., any actuary working with or responsible for the model and its output) not only to know and understand these limitations but also to communicate them to stakeholders. Reinsurance modeling offers an example. A common technique in modeling the many treaties of yearly renewable term (YRT) reinsurance of a given cohort of policies is to use a simplification, in which YRT premium rates are blended according to a weighted average of net amounts at risk. That is to say, the treaties are not modeled seriatim but as an aggregate or blended treaty applicable to amounts in excess of retention. This approach assumes each third-party reinsurer is as solvent as the next. The actuary must ask, “Is there a risk that is ignored by the model because of the approach to modeling YRT reinsurance?” and “Does this simplification present a limitation that could materially impact the net premium reserve, deterministic reserve or stochastic reserve?”
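To make the simplification concrete, the following is a minimal sketch of blending YRT premium rates by net amount at risk (NAR). The treaty data, reinsurer names, and rates are hypothetical, and a production model would be far richer; the point is that once the treaties are blended, the identity of each counterparty disappears from the calculation.

```python
def blended_yrt_rate(treaties):
    """Weighted-average YRT premium rate, weighted by net amount at risk."""
    total_nar = sum(t["nar"] for t in treaties)
    if total_nar == 0:
        return 0.0
    return sum(t["rate"] * t["nar"] for t in treaties) / total_nar

# Hypothetical treaties ceded to two third-party reinsurers.
treaties = [
    {"reinsurer": "Re A", "nar": 40_000_000, "rate": 0.0012},
    {"reinsurer": "Re B", "nar": 10_000_000, "rate": 0.0020},
]

rate = blended_yrt_rate(treaties)
# The blended rate feeds downstream reserve calculations, but the model
# can no longer distinguish reinsurers, so differences in counterparty
# solvency are invisible to it. This is exactly the ignored risk the
# actuary must identify and disclose.
```

This is where the disclosure obligation bites: the simplification may be perfectly reasonable, but the actuary must still ask whether the risk it ignores could materially affect the NPR, DR or SR.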
Understanding the limitations of a model requires understanding the end-to-end process that moves from data and assumptions to results and analysis. The extract-transform-load (ETL) process fits well with the ASOP No. 56 definition of a model, which is: “A model consists of three components: an information input component, which delivers data and assumptions to the model; a processing component, which transforms input into output; and a results component, which translates the output into useful business information.” Many actuaries work with models on a daily basis, yet it helps to revisit this important definition. Many would not recognize that the routine step of accessing the policy-level data necessary to create an in-force file is part of the model itself. The actuary should ask, “Are there risks introduced by the front-end or back-end processing in the ETL routine?” and “What mitigations has the company established over time to address these risks?”
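The three ASOP No. 56 components map naturally onto an ETL pipeline. The sketch below is purely illustrative, with hypothetical function and field names and a deliberately trivial projection; its purpose is to show that the in-force extraction step sits inside the model boundary, not outside it.

```python
def extract_inforce(policy_rows):
    """Information input component: delivers data to the model.
    Note that this routine data-access step is itself part of the model."""
    return [r for r in policy_rows if r["status"] == "inforce"]

def project_reserves(inforce, interest_rate):
    """Processing component: transforms input into output.
    (A one-period discount stands in for a real projection engine.)"""
    return [r["face_amount"] / (1 + interest_rate) for r in inforce]

def summarize(results):
    """Results component: translates output into business information."""
    return {"total_reserve": sum(results), "policy_count": len(results)}

rows = [
    {"policy": "P1", "status": "inforce", "face_amount": 100_000},
    {"policy": "P2", "status": "lapsed", "face_amount": 50_000},
]
report = summarize(project_reserves(extract_inforce(rows), 0.03))
```

A bug in `extract_inforce` (say, a status code mishandled during extraction) would corrupt the reserve just as surely as a bug in the projection itself, which is why the ETL front end belongs in any inventory of model risks.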
At least 20 older climate models disagreed with the new one at NCAR, an open-source model called the Community Earth System Model 2, or CESM2, funded mainly by the U.S. National Science Foundation and arguably the world’s most influential. Then, one by one, a dozen climate-modeling groups around the world produced similar forecasts.
The scientists soon concluded their new calculations had been thrown off kilter by the physics of clouds in a warming world, which may amplify or damp climate change. “The old way is just wrong, we know that,” said Andrew Gettelman, a physicist at NCAR who specializes in clouds and helped develop the CESM2 model. “I think our higher sensitivity is wrong too. It’s probably a consequence of other things we did by making clouds better and more realistic. You solve one problem and create another.”
Since then the CESM2 scientists have been reworking their algorithms using a deluge of new information about the effects of rising temperatures to better understand the physics at work. They have abandoned their most extreme calculations of climate sensitivity, but their more recent projections of future global warming are still dire — and still in flux.
Skeptics have scoffed at climate models for decades, saying they overstate hazards. But a growing body of research shows many climate models have been uncannily accurate. For one recent study, scientists at NASA, the Breakthrough Institute in Berkeley, Calif., and the Massachusetts Institute of Technology evaluated 17 models used between 1970 and 2007 and found most predicted climate shifts were “indistinguishable from what actually occurred.”
Still, models remain prone to technical glitches and are hampered by an incomplete understanding of the variables that control how our planet responds to heat-trapping gases.
Having taken all the modelling into account, SAGE produced a table that showed in stark terms what the future held if the government stuck to ‘Plan B’. With the usual risible caveat that ‘these are not forecasts or predictions’, they showed a peak in hospitalisations of between 3,000 and 10,000 per day and a peak in deaths of between 600 and 6,000 a day. In previous waves, without any vaccines, deaths had never exceeded 1,250 a day.
The government was effectively given an ultimatum. SAGE offered Johnson a choice between the disaster that would surely unfold and a ‘Step 1’ or ‘Step 2’ lockdown, both of which had been helpfully modelled to give him a steer. ‘Step 1’ was a full lockdown as implemented last January. ‘Step 2’ allowed limited contact with other households but only outdoors.
In the event, as we all know, Boris Johnson ignored the warnings and declined to implement any new restrictions on liberty. A few days later, Robert West, a nicotine-addiction specialist who is on SAGE for some reason, tweeted: ‘It is now a near certainty that the UK will be seeing a hospitalisation rate that massively exceeds the capacity of the NHS. Many thousands of people have been condemned to death by the Conservative government.’
It did not quite turn out that way. Covid-related hospitalisations in England peaked at 2,370 on 29 December and it looks like the number of deaths will peak well below 300. This is not just less than was projected under ‘Plan B’, it is less than was projected under a ‘Step 2’ lockdown. The modelling for ‘Step 2’ showed a peak of at least 3,000 hospitalisations and 500 deaths a day. SAGE had given itself an enormous margin of error. There is an order of magnitude between 600 deaths a day and 6,000 deaths a day and yet it still managed to miss the mark.
A U.S. judge on Wednesday narrowed but refused to dismiss a Securities and Exchange Commission lawsuit accusing Morningstar Inc. of letting analysts adjust credit rating models for about $30 billion of mortgage securities, resulting in lower payouts to investors.
U.S. District Judge Ronnie Abrams in Manhattan said the SEC plausibly alleged that Morningstar Credit Ratings failed to provide users with a general understanding of its methodology for rating commercial mortgage-backed securities and lacked effective internal controls over its ratings process.
Not all 10% increases are created equal. By that we mean that assumption effects are often more impactful in one direction than in the other, especially in truncation models or those that use a conditional tail expectation (CTE) measure.
Principles-based reserves, for example, use a CTE70 measure: take the average of the worst (100% – 70% = 30%) of the scenarios. If your model increases expenses 3% across the board, then, sure, on average your asset funding need might increase by exactly that amount. However, because your final measurement isn’t the average across all the scenarios, but only the worst ones, your reserve amounts are likely to increase by significantly more than the average. You might need to run a few different tests, at various magnitudes of change, to determine how your various outputs change as a function of the volatility of your inputs.
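A small numerical sketch makes the asymmetry visible. The scenario values below are made up for illustration; the key feature is that expenses are heavier in the adverse scenarios, so a uniform 3% expense shock moves the CTE70 by more than it moves the all-scenario average.

```python
def cte(values, level=0.70):
    """Conditional tail expectation: the average of the worst
    (1 - level) share of scenarios (here, the worst 30%)."""
    ranked = sorted(values, reverse=True)          # worst (largest) first
    k = max(1, round(len(ranked) * (1 - level)))
    return sum(ranked[:k]) / k

# Ten hypothetical scenario reserves; expenses are larger in the tail.
base     = [100, 100, 100, 100, 100, 100, 100, 110, 120, 130]
expenses = [ 10,  10,  10,  10,  10,  10,  10,  20,  30,  40]

before = [b + e for b, e in zip(base, expenses)]
after  = [b + 1.03 * e for b, e in zip(base, expenses)]   # +3% expenses

mean_shift = sum(after) / len(after) - sum(before) / len(before)
cte_shift  = cte(after) - cte(before)
# cte_shift exceeds mean_shift because the worst 30% of scenarios carry
# more expense than the average scenario does, so the uniform 3% shock
# hits the tail measure harder than the mean.
```

Running the same test at several shock magnitudes, in both directions, is how you map out how the CTE-based outputs respond to the volatility of the inputs.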
It is a wonder that nobody choked on their morning toast and tea, for if Imperial modelling has stood for anything in this crisis, it is relentless pessimism. Plummeting figures were certainly not predicted by its researchers. The difference this time is that the Government has pressed ahead with reopening despite the doom-mongering, and so has proven the models wrong.
Here is what they said would happen and what we know now: Hospital admissions When the Government published its roadmap out of the pandemic on Feb 22, it was largely based on modelling assumptions from Imperial, the London School of Hygiene & Tropical Medicine and Warwick University.
Imperial modelled four unlocking scenarios, ranging from “very fast” to “gradual”. Under the fastest, full lifting would occur at the end of April, while under the slowest, Britain would not see restrictions eased until Aug 2.
In the end, the Government chose a path somewhere between “fast” and “medium”, yet the Imperial model predicted that would still lead to Covid hospital bed occupancy of about 15,000 to 25,000 in the summer and early autumn – which was higher than the first peak in April 2020.
“There’s only one model that we look at that has the number of projected deaths which is the IHME model which is funded by the Gates Foundation,” Cuomo said on April 2, adding, “and we thank the Gates Foundation for the national service that they’ve done.”
In an April 9 briefing, Michigan Governor Gretchen Whitmer referred to the IHME model in order to project deaths and the PPE resources needed for the supposed surge.
It was the same story with the government of Pennsylvania. The PA Health Department exclusively uses IHME models to forecast coronavirus outcomes.
Governor Phil Murphy, another nursing home death warrant participant, used IHME models to navigate the state’s policy response.
For a few months last year, Nigel Goldenfeld and Sergei Maslov, a pair of physicists at the University of Illinois, Urbana-Champaign, were unlikely celebrities in their state’s COVID-19 pandemic response — that is, until everything went wrong.
Following the model’s guidance, the University of Illinois formulated a plan. It would test all its students for the coronavirus twice a week, require the use of masks, and put in place other logistical controls, including an effective contact-tracing system and an exposure-notification phone app. The math suggested that this combination of policies would be sufficient to allow in-person instruction to resume without touching off exponential spread of the virus.
But on September 3, just one week into its fall semester, the university faced a bleak reality. Nearly 800 of its students had tested positive for the coronavirus — more than the model had projected by Thanksgiving. Administrators had to issue an immediate campus-wide suspension of nonessential activities.