Two-and-a-half years after Covid-19 emerged, reported infections are way down, pandemic restrictions are practically gone and life in many respects is approaching normal. The labor force, however, is not.
Researchers say the virus is having a persistent effect, keeping millions out of work and reducing the productivity and hours of millions more, disrupting business operations and raising costs.
In the average month this year, nearly 630,000 more workers missed at least a week of work because of illness than in the years before the pandemic, according to Labor Department data. That is a reduction in workers equal to about 0.4 percent of the labor force, a significant amount in a tight labor market. That share is up about 0.1 percentage point from the same period last year, the data show.
There are many books about spreadsheets out there. Most of these books will tell you things like “How to save a file” and “How to make a graph” and “How to compute the present value of a stream of cashflows” and “How to use conjoint analysis to figure out which features you should add to the next version of your company’s widgets in order to impress senior management and get a promotion and receive a pay raise so you can purchase a bigger boat than your neighbor has.”
This book isn’t about any of those. Instead, it’s about how to Think Spreadsheet. What does that mean? Well, spreadsheets lend themselves well to solving specific types of problems in specific types of ways. They lend themselves poorly to solving other specific types of problems in other specific types of ways.
Thinking Spreadsheet entails the following:
Understanding how spreadsheets work, what they do well, and what they don’t do well.
Using the spreadsheet’s structure to intelligently organize your data.
Solving problems using techniques that take advantage of the spreadsheet’s strengths.
Building spreadsheets that are easy to understand and difficult to break.
To help you learn how to Think Spreadsheet, I’ve collected a variety of curious and often whimsical examples. Some represent problems you are likely to encounter out in the wild, others problems you’ll never encounter outside of this book. Many of them we’ll solve multiple times. That’s because in each case, the means are more interesting than the ends. You’ll never (I hope) use a spreadsheet to compute all the prime numbers less than 100. But you’ll often (I hope) find useful the techniques we’ll use to compute those prime numbers, and if you’re clever you’ll go away and apply them to all sorts of real-world problems. As with most books of this sort, you’ll really learn the most if you recreate the examples yourself and play around with them, and I strongly encourage you to do so.
Author(s): Joel Grus
Publication Date: 2010 (originally in dead-tree form); accessed 29 Oct 2022
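As a toy illustration (mine, not the book's), here is the primes-under-100 exercise the author mentions, done with a simple sieve of Eratosthenes. The point, as the excerpt says, is the technique rather than the answer; the function name and structure here are my own.

```python
def primes_below(n):
    """Return all prime numbers less than n using a sieve of Eratosthenes."""
    is_prime = [True] * n
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n, p):
                is_prime[multiple] = False  # cross off multiples of p
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_below(100))  # 25 primes, from 2 through 97
```

The same cross-off-the-multiples idea translates directly to a spreadsheet grid of flags, which is the kind of carry-over the book is after.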
The main concern of managers was that their assessors, like the rest of the population, were limited in how they could unwind or escape, given lockdown restrictions and reduced freedom. This contrasted with their usual routines.
We asked about the impact of these concerns on the health of claims professionals. Absenteeism within claims teams varied across the companies, and while sick leave increased slightly, there did not appear to be any significant or concerning trends (Figure 6).
Professionalization leads us to an interesting dilemma. Actuarial culture and, for that matter, organizational culture got insurance companies to where they are today. If the culture were not moderately successful, then the company would not still exist. But this is where Prospect theory emerges from the shadows. It is human nature not to want to lose the culture that enabled your success. Many people nonetheless thirst for the gains earned by moving in a new direction. Risk aversion further reinforces the stickiness of culture, especially for risk-averse professions and industries. Drawing from author Tony Robbins, you cannot become who you want to be by staying who you currently are. Our professionalization, coupled with our risk aversion, creates a double whammy. Practices appropriate to prior eras have a propensity to be locked in place. Oh, but it gets worse!
By the nature of transformation and modernization, knowledge and know-how are embedded in the current people, processes and systems. The knowledge and know-how must be migrated from the prior technology to modern technology. Just like your computer’s hard drive gets fragmented, so too does a firm’s expertise as people change focus, move jobs or leave companies. The long-dated nature of our promises can severely exacerbate the issue. Human knowledge and know-how are not very compressible, unlike biological seeds and eggs. In a time-consuming defragmenting exercise, information, knowledge and know-how must be painstakingly moved, relearned and adapted for the new system. This transformation requires new practices, further exacerbating the shock to the culture. Oh, but it gets even worse!
The transformation process requires existing teams to change, recombine or communicate in new ways. This means their cultures will potentially clash. Lack of trust and bureaucracy are the most significant frictions to collaboration among networks. The direct evidence of this is when project managers vent that teams x, y and z cannot seem to work together. It is because they do not have a reference system to know how to work together.
Artificial intelligence (“AI”) adoption in the insurance industry is increasing. One known risk as adoption of AI increases is the potential for unfair bias. Central to understanding where and how unfair bias may occur in AI systems is defining what unfair bias means and what constitutes fairness.
This research identifies methods to avoid or mitigate unfair bias unintentionally caused or exacerbated by the use of AI models and proposes a potential framework for insurance carriers to consider when looking to identify and reduce unfair bias in their AI models. The proposed approach includes five foundational principles as well as a four-part model development framework with five stage gates.
Smith, L.T., E. Pirchalski, and I. Golbin. Avoiding Unfair Bias in Insurance Applications of AI Models. Society of Actuaries, August 2022.
Unfortunately, fraud is rampant in the insurance industry. Property and casualty insurance alone loses about $30 billion to fraud every year, and fraud occurs in nearly 10% of all P&C losses. ML can mitigate this issue by flagging potentially fraudulent claims early in the process, giving insurers time to investigate and correctly identify them.
5. Claims processing
Claims processing is notoriously arduous and time-consuming. ML technology is a tool to reduce processing costs and time, from the initial claim submission to reviewing coverages. Moreover, ML supports a great customer experience because it allows the insured to check the status of their claim without having to reach out to their broker/adjuster.
Adderall and monkeypox vaccine represent only the tip of the iceberg when it comes to drugs now in short supply in the United States — some badly needed by patients who are seriously ill with life-threatening diseases.
Pharmacists tell UPI of scrambling to meet patients’ urgent needs amid current shortages ranging from basics like sterile water and saline to antibiotics, sedatives and cancer-fighting medications.
As of Thursday, the Food and Drug Administration reported 184 drug shortages nationwide. The American Society of Health-System Pharmacists put the figure higher, tracking a scarcity of 210 drugs.
Conserving the supply of a drug once healthcare providers know it’s going to become scarce may include setting guidelines for the medication’s use and rationing doses, the American Society of Health-System Pharmacists’ Ganio said.
That occurred this spring after GE Healthcare, because of local COVID-19 policies, shuttered its facility in China that makes injectable contrast solutions used to highlight CT scan images.
“As of now, that shortage has been fixed. But the underlying fragility of the system continues … and there is no reason it couldn’t happen again,” Dr. Matthew Davenport, vice chair of the American College of Radiology’s Commission on Quality and Safety, told UPI in a recent phone interview.
But in a court filing Monday, Jonathan Marks, the deputy elections secretary, acknowledged that a fourth county, Butler, had also refused to count those ballots — and that the county had notified the department three weeks before the lawsuit was filed.
Marks apologized to the court for what he described as an oversight resulting from “a manual process” — a spreadsheet — the department had used to track which counties were counting undated ballots. Butler County was misclassified in the spreadsheet, he said, and from that point forward was left out of the state’s campaign to push counties that hadn’t included them.
The Department of Justice has dropped its investigation into the Pennsylvania Public School Employees’ Retirement System, said Chris Santa Maria, chairman of the $75.9 billion pension fund’s board of trustees, in a statement. PSERS made no further comment on the matter.
The pension fund had been under investigation by the Justice Department since at least May of last year, when subpoenas indicated that the FBI and prosecutors were seeking evidence of kickbacks and bribes at PSERS.
The subpoenas were reportedly looking for information from the pension fund, its executive director, chief financial officer, chief auditing officer and deputy CIO. The court orders reportedly showed that the FBI and prosecutors were probing possible “honest services fraud” and wire fraud.
According to a report released earlier this year following an internal investigation, PSERS investment consultant Aon took responsibility for the accounting error. The report includes a letter from Aon to Grossman that said the firm had become aware of data corruption in some sub-composite market values, cashflows and returns for April 2015.
Aon attributed the data corruption to an error by an analyst in uploading net asset value and cashflow data into the performance system it uses. The company said the data corruption impacted “a few asset class composites” in the public markets.
In the PBRAR, VM-31 3.D.2.e.(iv) requires the actuary to discuss “which risks, if any, are not included in the model” and 3.D.2.e.(v) requires a discussion of “any limitations of the model that could materially impact the NPR [net premium reserve], DR [deterministic reserve] or SR [stochastic reserve].” ASOP No. 56 Section 3.2 states that, when expressing an opinion on or communicating results of the model, the actuary should understand: (a) important aspects of the model being used, including its basic operations, dependencies, and sensitivities; (b) known weaknesses in assumptions used as input and known weaknesses in methods or other known limitations of the model that have material implications; and (c) limitations of data or information, time constraints, or other practical considerations that could materially impact the model’s ability to meet its intended purpose.
Together, VM-31 and ASOP No. 56 require the actuary (i.e., any actuary working with or responsible for the model and its output) to not only know and understand but communicate these limitations to stakeholders. An example of this may be reinsurance modeling. A common technique in modeling the many treaties of yearly renewable term (YRT) reinsurance of a given cohort of policies is to use a simplification, where YRT premium rates are blended according to a weighted average of net amounts at risk. That is to say, the treaties are not modeled seriatim but as an aggregate or blended treaty applicable to amounts in excess of retention. This approach assumes each third-party reinsurer is as solvent as the next. The actuary must ask, “Is there a risk that is ignored by the model because of the approach to modeling YRT reinsurance?” and “Does this simplification present a limitation that could materially impact the net premium reserve, deterministic reserve or stochastic reserve?”
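The blending simplification described above can be sketched numerically. This is a hypothetical illustration: the reinsurer names, net amounts at risk (NAR) and premium rates below are invented, and a production model would carry far more treaty detail.

```python
def blended_yrt_rate(treaties):
    """Weighted-average YRT premium rate, weighted by net amount at risk."""
    total_nar = sum(t["nar"] for t in treaties)
    return sum(t["rate"] * t["nar"] for t in treaties) / total_nar

# Illustrative treaties (not real data): rates are per unit of NAR.
treaties = [
    {"reinsurer": "Re A", "nar": 40_000_000, "rate": 0.0012},
    {"reinsurer": "Re B", "nar": 10_000_000, "rate": 0.0020},
]

# One blended rate now applies to all amounts in excess of retention,
# implicitly treating every reinsurer as equally solvent.
print(blended_yrt_rate(treaties))  # 0.00136
```

The collapse from two treaties to one rate is exactly where the counterparty-risk question in the text arises: if Re B were materially less solvent than Re A, the blended model has no way to see it.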
Understanding limitations of a model requires understanding the end-to-end process that moves from data and assumptions to results and analysis. The extract-transform-load (ETL) process actually fits well with the ASOP No. 56 definition of a model, which is: “A model consists of three components: an information input component, which delivers data and assumptions to the model; a processing component, which transforms input into output; and a results component, which translates the output into useful business information.” Many actuaries work with models on a daily basis, yet it helps to revisit this important definition. Many would not recognize the routine step of accessing the policy level data necessary to create an in-force file as part of the model itself. The actuary should ask, “Are there risks introduced by the frontend or backend processing in the ETL routine?” and “What mitigations has the company established over time to address these risks?”
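The three-component definition from ASOP No. 56 can be sketched as a pipeline. All function and field names here are my own, and the "processing" step is a trivial stand-in for a real projection engine; the point is that the routine ETL step that builds the in-force file sits inside the input component and can introduce risk of its own.

```python
def input_component(raw_policies):
    """Information input: an ETL step that builds the in-force file."""
    return [p for p in raw_policies if p["status"] == "in_force"]

def processing_component(in_force):
    """Processing: transform input into output (here, total face amount)."""
    return sum(p["face"] for p in in_force)

def results_component(output):
    """Results: translate output into useful business information."""
    return f"Total in-force face amount: {output:,}"

# Illustrative policy extract (not real data).
raw = [
    {"id": 1, "status": "in_force", "face": 100_000},
    {"id": 2, "status": "lapsed",   "face": 250_000},
    {"id": 3, "status": "in_force", "face": 50_000},
]

print(results_component(processing_component(input_component(raw))))
# Total in-force face amount: 150,000
```

A bug in the status filter, a stale extract, or a failed load in the first step would silently distort everything downstream, which is why the text asks what risks the frontend and backend of the ETL routine introduce.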
This is the common response when people learn about the US Navy’s Fat Leonard scandal. The high stakes drama and salacious details do seem made for the silver screen, but what’s more surprising is how many people — among them Hill staff, Pentagon budget experts, and other defense policy participants — are unaware of the crimes that proliferated up and down the ranks of the 7th Fleet less than a decade ago. That military leaders, Congress, and the public seem to have forgotten this affair that took down rising leaders, defrauded the US government, and undermined our national security is at least as troubling as the events themselves.
Here’s the short version of events:
The US Navy contracted with Glenn Marine Group (GMG), a ship husbanding company that assisted the Navy with port security, repairs, fueling, restocking and other dockside needs. The president of GMG, Leonard Glenn Francis (aka Fat Leonard), overbilled the Navy for things like fresh water and redirected carrier movements to ports where he could charge the most. He bribed officers with $18,000 meals and extravagant hotel stays, prostitutes, parties, cash, and luxury goods. He gained access to sensitive information and paid off people in positions to help him avoid investigations into his activities. Only after the US Department of Justice stepped in — to investigate a suspected mole within the Naval Criminal Investigative Service (NCIS) who was tipping off Leonard — did the enterprise start to unravel.
In 2013, federal agents arrested Leonard in San Diego and charged another 33 people with various crimes, though Leonard’s activities cast a much wider net. In 2018, the Washington Post reported that: “According to the Navy, an additional 550 active-duty and retired military personnel — including about 60 admirals — have come under scrutiny for possible violations of military law or ethics rules.”
The SEC’s complaint, filed in the federal district court in Manhattan, alleges that Structured Alpha’s Lead Portfolio Manager, Gregoire P. Tournant, orchestrated the multi-year scheme to mislead investors who invested approximately $11 billion in Structured Alpha, and paid the defendants over $550 million in fees. It further alleges that, with assistance from Co-Lead Portfolio Manager, Trevor L. Taylor, and Portfolio Manager, Stephen G. Bond-Nelson, Tournant manipulated numerous financial reports and other information provided to investors to conceal the magnitude of Structured Alpha’s true risk and the funds’ actual performance.
Defendants reduced losses under a market crash scenario in one risk report sent to investors from negative 42.1505489755747% to negative 4.1505489755747% — by simply dropping the single digit 2. In another example, defendants “smoothed” performance data sent to investors by reducing losses on one day from negative 18.2607085709004% to negative 9.2607085709004% — this time by cutting the number 18 in half.