Welcome to another episode of Positivity with Paul, where I find Fellow Actuaries – pun intended – for a conversational Q&A about their lives. The focus is on their journey along the actuarial exam path and beyond, some of the challenges they faced, and how those challenges helped shape them into who they are today.
To give some brief context on becoming an Actuary, there are a number of actuarial exams that one has to pass. These exams are very rigorous: typically, only the top 40% of candidates pass at each sitting. They cover complex mathematical topics like statistics and financial modelling, but also insurance, investments, regulation and accounting. Candidates can study up to 5 months per sitting, and they will take 7 to 10 years on average to earn their Fellowship designation. To that end, I launched this series of podcasts because I was curious about what drove my guests to surmount trials and tribulations to get to the end goal of becoming an Actuary.
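To get a feel for why the journey takes so long, here is a back-of-the-envelope sketch. The figures are purely illustrative: assuming a flat 40% pass rate and independent attempts (real pass rates vary by exam and candidate), the number of sittings needed for one exam follows a geometric distribution.

```python
# Hypothetical illustration: with a 40% pass rate per sitting and
# independent attempts, the sittings needed for a single exam are
# geometrically distributed with mean 1 / pass_rate.
pass_rate = 0.4
expected_sittings = 1 / pass_rate
print(expected_sittings)  # 2.5 sittings per exam, on average

# Probability of needing more than 3 attempts at one exam:
p_more_than_3 = (1 - pass_rate) ** 3
print(round(p_more_than_3, 3))  # 0.216
```

Multiply 2.5 expected sittings by roughly ten exams, with months of study before each, and the 7-to-10-year average quoted above starts to look unsurprising.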
My guest in this interview is Mary Pat Campbell. Mary Pat is an actuary working in Connecticut, investigating life insurance and annuity industry trends. She has been interested in exploring mortality trends, public finance and public pensions as an avocation. Some of these explorations can be found at her blog: stump.marypat.org. Mary Pat is a fellow of the Society of Actuaries and a member of the American Academy of Actuaries. She has been working in the life/annuity industry since 2003. She holds a master’s degree in math from New York University and undergraduate degrees in math and physics from North Carolina State University. In this podcast, Mary Pat discusses similarities in concepts between physics and actuarial science, the current low interest rate environment and lessons learnt in the insurance sector from the financial crisis in 2008-2009. Hope you enjoy this all-inclusive interview! Paul Kandola
Formal citation: James E. Ciecka. 2008. Edmond Halley’s Life Table and Its Uses. Journal of Legal Economics 15(1): pp. 65-74.
Halley obtained demographic data for Breslau, a city in Silesia which is now the Polish city Wroclaw. Breslau kept detailed records of births, deaths, and the ages of people when they died. In comparison, when John Graunt (1620-1674) published his famous demographic work (1662), ages of deceased people were not recorded in London and would not be recorded until the 18th century.
Caspar Neumann, an important German minister in Breslau, sent some demographic records to Gottfried Leibniz, who in turn sent them to the Royal Society in London. Halley analyzed Neumann’s data, which covered the years 1687-1691, and published the analysis in the Philosophical Transactions. Although Halley had broad interests, demography and actuarial science were quite far afield from his main areas of study. Hald (2003) has speculated that Halley himself analyzed these data because, as the editor of the Philosophical Transactions, he was concerned about the Transactions publishing an adequate number of quality papers. Apparently, by doing the work himself, he ensured that one more high-quality paper would be published.
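The core mechanics of what Halley did with the Breslau records can be sketched simply: from counts of deaths by age, build the survivors column of a life table by subtracting cumulative deaths from a starting cohort. The numbers below are made up for illustration and are not Halley's actual data.

```python
# Illustrative sketch (hypothetical numbers, not the Breslau figures):
# deaths recorded by age at death, for a starting cohort (radix) of 1,000.
deaths_by_age = {0: 300, 1: 100, 5: 60, 10: 40, 20: 50, 40: 80, 60: 70}
radix = 1000

# l_x: number still alive at exact age x, built by running subtraction.
survivors = {}
alive = radix
for age, deaths in sorted(deaths_by_age.items()):
    survivors[age] = alive
    alive -= deaths

print(survivors)
# {0: 1000, 1: 700, 5: 600, 10: 540, 20: 500, 40: 450, 60: 370}
```

This survivors column is exactly the kind of table that makes annuity pricing possible: the probability of a newborn surviving to age 60 in this toy example is 370/1000.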
1) The FIO was created in the wake of the financial crisis, as part of the Dodd-Frank Act. It has since been active on two fronts: as a source of information about the insurance industry for the U.S. Department of the Treasury and other branches of government, and as a representative of the insurance industry in international negotiations.
2) The FIO has had a challenging first decade. Since its launch, insurers have worried that a new federal body would, like all bureaucracies, be the camel’s nose in the tent, eventually attempting to expand its scope. Today, even though many have come to accept the FIO—provided it does not attempt to exceed its authority—there are still efforts to abolish it.
3) In the past, government restrictions on the free market through involvement in insurance have proven inefficient and anticompetitive. Should the FIO advance legislative attempts to address “affordability and accessibility” of insurance, it will likely disrupt an efficient private market that is already closely regulated at the state level.
SOA leadership and members discuss the University-Earned Credit (UEC) program. Watch this recording of the May 24 member town hall about UEC. If you have any additional questions, email us at email@example.com. Learn about the UEC program by visiting https://www.soa.org/education/resources/uec/uec-program/
You can’t compare results from Bayesian and frequentist methods because the results are different kinds of things. Results from frequentist methods are generally a point estimate, a confidence interval, and/or a p-value.
In contrast, the result from Bayesian methods is a posterior distribution, which is a different kind of thing from a point estimate, an interval, or a probability. It doesn’t make any sense to say that a distribution is “the same as” or “close to” a point estimate because there is no meaningful way to compute a distance between those things. It makes as much sense as comparing 1 second and 1 meter.
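The contrast in output types can be made concrete with a small sketch. The data and prior below are made up for illustration: for a binomial proportion, the frequentist machinery returns a point estimate and an interval, while the Bayesian machinery returns a whole distribution (here a Beta posterior), of which any mean or interval is merely a summary.

```python
import math

# Made-up data: 12 successes in 40 trials.
k, n = 12, 40

# Frequentist output: a point estimate and a 95% Wald confidence interval.
p_hat = k / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian output: with a Beta(1, 1) prior, the posterior is the entire
# distribution Beta(1 + k, 1 + n - k). We can summarise it, but the
# result *is* the distribution, not any single number.
a, b = 1 + k, 1 + n - k
posterior_mean = a / (a + b)
posterior_sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

print(round(p_hat, 3), [round(x, 3) for x in ci])
print(round(posterior_mean, 3), round(posterior_sd, 3))
```

The two summaries happen to be numerically close here, but that closeness is a statement about two summaries, not a comparison between an estimate and a distribution.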
In courtrooms, mixing up the probability of “A given B” with “B given A” is known as the “prosecutor’s fallacy”. In 1999, a court convicted Sally Clark of the murder of her two sons, in part because a medical expert claimed the chance of two accidental cot deaths was one in 73m. Even if this number were right – which it wasn’t – it did not reflect the chance she was innocent. A double murder was also very rare: the relative likelihood of the two explanations was key. With new evidence and better statistical reasoning, an appeal court quashed the conviction.
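Bayes' rule shows why the relative likelihood of the two rare explanations is what matters. The round numbers below are hypothetical, not the actual case figures; the point is only the structure of the calculation.

```python
# Hedged illustration with made-up round numbers (not the real case data).
# Posterior odds of guilt = prior odds of guilt * likelihood ratio.
p_evidence_given_innocent = 1 / 100_000   # chance of two accidental deaths
p_evidence_given_guilty = 1.0             # double murder would explain the deaths
prior_odds_guilty = 1 / 1_000_000         # double murders are also very rare

likelihood_ratio = p_evidence_given_guilty / p_evidence_given_innocent
posterior_odds_guilty = prior_odds_guilty * likelihood_ratio
print(round(posterior_odds_guilty, 6))
```

With these illustrative numbers the posterior odds come out to 0.1: even after the damning-sounding evidence, guilt would be ten times *less* likely than innocence, because both explanations are rare and only their ratio matters.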
There was controversy after a recent Observer headline referred to Bayes’s theorem as “obscure”. His ideas may be little known by the public, but their use is growing among scientists. Many complex analyses done during the pandemic have been “Bayesian”, including modelling lockdown effects, the ONS infection survey, and Pfizer-BioNTech’s vaccine trial. The term “credible interval”, rather than “confidence interval”, is the giveaway.
Last week, Cass Business School announced it is renaming itself after Bayes and his theorem. The obscure tomb in nearby Bunhill Fields is worth a visit.
If nothing else, having a checklist to go through while working on modeling can help you make sure you don’t miss anything. Hey, ASB, make some handy-dandy sticky note checklists we can stick on our monitors to ask us:
3.1 Does our model meet the intended purpose?
3.2 Do we understand the model, especially any weaknesses and limitations?
3.3 Are we relying on data or other information supplied by others?
3.4 Are we relying on models developed by others?
3.5 Are we relying on experts in the development of the model?
3.6 Have we evaluated and mitigated model risk?
3.7 Have we appropriately documented the model?
Author(s): Mary Pat Campbell
Publication Date: April 2021
Publication Site: The Modeling Platform at the Society of Actuaries
In his youth, the economist Kenneth Arrow analysed weather forecasts for the US Army. When he found that the predictions were as reliable as historical averages, he suggested reallocating manpower. The response from the army general’s office? “The general is well aware that your division’s forecasts are worthless. However, they are required for planning purposes.”
Even before COVID-19, many shared that scepticism of forecasts. The failure to foresee the 2008-09 financial crisis started a debate on economic modelling. Over the past year, the performance of epidemiological models has not resolved this quandary.
Investors have long known that “all models are wrong, but some are useful,” to use the statistician George Box’s pithy aphorism. But some modellers use this defence to preserve models beyond their usefulness. Meanwhile, consumers of models, including investors, policymakers and society, hold unrealistic expectations: they assume that complex issues are easy to forecast, when some things are just unknowable. This gap raises the question of what investors should do.
For example, I once joined a team maintaining a system that was drowning in bugs. There were something like two thousand open bug reports. Nothing was tagged, categorized, or prioritized. The team couldn’t agree on which issues to tackle, so they were stuck pulling bugs essentially at random, and it was never clear whether a given issue was important. New bug reports couldn’t be triaged effectively because finding duplicates was nearly impossible, so the open ticket count continued to climb. The team had been stalled for months. I was tasked with solving the problem: get the team unstuck, reverse the trend in the open ticket count, and come up with a way to eventually drive it down to zero.
So I used the same trick as the magician, which is no trick at all: I did the work. I printed out all the issues – one page of paper for each issue. I read each page. I took over a huge room and started making piles on the floor. I wrote tags on sticky notes and stuck them to piles. I shuffled pages from one stack to another. I wrote ticket numbers on whiteboards in long columns; I imagined I was Ben Affleck in The Accountant. I spent almost three weeks in that room, and emerged with every bug report reviewed, tagged, categorized, and prioritized.