Social Security states, at this link: retirement/planner/AnypiaApplet.html, that “(Its) Online Calculator is updated periodically with new benefit increases and other benefit amounts. Therefore, it is likely that your benefit estimates in the future will differ from those calculated today.” It also says that the most recent update was in August 2023.
This statement references Social Security’s Online Calculator. But the agency offers a number of calculators that make different assumptions, and it’s not clear which calculator was used to produce the graphic (see below) that projects your future retirement benefit conditional on working up to particular dates and then collecting immediately. Nor has Social Security made clear what changes it makes to its calculators over time.
What I’m quite sure is true is that the code underlying Social Security’s graphic projects your future earnings at their current nominal value. This is simply nuts. Imagine you are age 40 and will work till age 67 and take your benefits then. If inflation averages 4 percent a year over the next 27 years, your real earnings are being projected to decline by roughly 65 percent! This is completely unrealistic and makes the chart, if my understanding is correct, useless.
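To make the arithmetic concrete, here is a minimal sketch. The 4 percent annual inflation rate is an assumption chosen purely for illustration:

```python
years = 27               # age 40 to age 67
annual_inflation = 0.04  # assumed average annual inflation rate

# If the calculator holds your nominal wage fixed, the wage's real
# value after `years` of inflation shrinks by this factor:
real_value_factor = 1 / (1 + annual_inflation) ** years
real_decline = 1 - real_value_factor

print(f"Projected real earnings decline: {real_decline:.0%}")  # 65%
```

Holding nominal wages flat for a quarter century is equivalent to assuming your real pay falls by roughly two-thirds, which is the point being made above.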
The only thing that might, to my knowledge, reduce projected future benefits over the course of the past four months is a reduction in Social Security’s projected future bend-point values in its PIA (Primary Insurance Amount) formula. Lower bend points would push more of a worker’s earnings into the higher PIA brackets, where the replacement rates are lower, reducing the projected benefit. This could also explain why the differences in projections vary by person.
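For readers unfamiliar with the formula, the PIA applies declining replacement rates to average indexed monthly earnings (AIME) above each bend point. A minimal sketch, using what I believe are the 2023 bend-point values ($1,115 and $6,721); the AIME figure below is made up for illustration:

```python
def pia(aime, bend1=1115, bend2=6721):
    """Primary Insurance Amount: 90% of AIME up to the first bend
    point, 32% between the bend points, 15% above the second.
    Bend-point dollar values change each year; the defaults here are
    believed to be the 2023 values (an assumption)."""
    return (0.90 * min(aime, bend1)
            + 0.32 * max(0, min(aime, bend2) - bend1)
            + 0.15 * max(0, aime - bend2))

# Lower bend points push more of the same AIME into the 32%/15%
# brackets, which lowers the benefit:
print(round(pia(5000), 2))                          # 2246.7
print(round(pia(5000, bend1=1000, bend2=6000), 2))  # 2180.0
```

The second call shows the mechanism conjectured above: shrinking the bend points, with earnings unchanged, reduces the computed PIA.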
Millions of workers are being told, from essentially one day to the next, that their future real Social Security income will be dramatically lower. Furthermore, the assumption underlying this basic chart — that your nominal wages will never adjust for inflation — means that Social Security’s future benefit estimate is ridiculous regardless of what it’s assuming under the hood about future bend points.
One possibility here is that a software engineer has made a big coding mistake. This happens. On February 23, 2022, I reported in Forbes that Social Security had transmitted, to unknown millions of workers, future retirement benefit statements that were terribly wrong. The statement emailed to me by a worker, which I copy in my column, specified essentially the same retirement benefit at age 62 as at full retirement age. It also specified a higher benefit for taking benefits several months before full retirement.
Anyone familiar with Social Security benefit calculations would instantly conclude that there was either a major bug in the code or that, heaven forbid, the system had been hacked. But if this wasn’t a hack, why would anyone have changed code that Social Security claimed, for years, was working correctly? Social Security made no public comment in response to my prior column. But it evidently fixed its code, as I suddenly stopped receiving crazy benefit statements.
For each question and each model, the rating represents the number of runs (out of 5 total runs) that had concerning race-based responses. Red correlates with a higher number of concerning race-based responses.
Large language models (LLMs) are being integrated into healthcare systems, but these models may recapitulate harmful, race-based medicine. The objective of this study is to assess whether four commercially available LLMs propagate harmful, inaccurate, race-based content when responding to eight different scenarios that check for race-based medicine or widespread misconceptions around race. Questions were derived from discussions among four physician experts and prior work on race-based medical misconceptions believed by medical trainees. We assessed the four models with nine different questions, each interrogated five times, for a total of 45 responses per model. All models had examples of perpetuating race-based medicine in their responses, and models were not always consistent in their responses when asked the same question repeatedly. LLMs are being proposed for use in the healthcare setting, with some models already connecting to electronic health record systems. However, our findings show that these LLMs could potentially cause harm by perpetuating debunked, racist ideas.
Author(s): Jesutofunmi A. Omiye, Jenna C. Lester, Simon Spichak, Veronica Rotemberg & Roxana Daneshjou
Data visualization is cool, but at the same time it’s a bit daunting that I need to know lots of tech stacks to actually implement it.
I totally agree that even when I was studying data visualization, I spent a pretty substantial amount of time learning how to code, handle web hosting, work with Python, SQL, and more, all while absorbing knowledge on information visualization.
Thankfully, we no longer need to fight through the technical gatekeepers in this field. This doesn’t mean that technical knowledge is not valuable, but rather that we no longer need to be intimidated by technology, because AI can spoon-feed us knowledge and do the heavy lifting for us. Are you excited? Let’s get started!
I’m going to build the data visualization that one of my students posted in the weekly write-up homework.
RGA decided to put ChatGPT to the test by posing three questions related to underwriting and the future of life insurance. The answers from ChatGPT that follow are entirely AI-generated. Following those responses is a brief FAQ that further explores this emerging topic.
How will COVID-19 affect U.S. long-term mortality?
It is difficult to predict the long-term effects of COVID-19 on mortality in the United States. The pandemic has already had a significant impact on mortality, with over 1.1 million COVID-attributed deaths reported in the country. However, it is possible that the mortality rate from COVID-19 could decline over time as more people become immune to the virus and as effective vaccines are developed and distributed.
There are also other factors that could affect the long-term mortality rate from COVID-19. For example, the effectiveness of public health measures, such as mask-wearing and social distancing, in controlling the spread of the virus could have a significant impact on the number of deaths. In addition, the availability of effective treatments for COVID-19 could also play a role in reducing the mortality rate from the virus.
It is important to continue monitoring the situation and taking steps to control the spread of the virus to minimize the impact on mortality.
In August, Birny Birnbaum, the executive director of the Center for Economic Justice, asked the [NAIC] Market Regulation committee to train analysts to detect “dark patterns” and to define dark patterns as an unfair and deceptive trade practice.
The term “dark patterns” refers to techniques an online service can use to get consumers to do things they would otherwise not do, according to draft August meeting notes included in the committee’s fall national meeting packet.
Dark pattern techniques include nagging; efforts to keep users from understanding and comparing prices; obscuring important information; and the “roach motel” strategy, which makes signing up for an online service much easier than canceling it.
OpenAI inside Excel? How can you use an API key to connect to an AI model from Excel? This video shows you how. You can download the files from the GitHub link above. Wouldn’t it be great to have a search box in Excel you can use to ask any question? Like to create dummy data, create a formula, or ask about the cast of The Sopranos. And then artificial intelligence provides the information directly in Excel – without any copy and pasting! In this video you’ll learn how to set up an API connection from Microsoft Excel to OpenAI’s ChatGPT (GPT-3) by using Office Scripts. As a bonus I’ll show you how to parse the result if the answer from GPT-3 spans more than one line. This makes it easier to use the information in Excel.
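The video’s implementation is in Office Scripts (TypeScript). As a language-neutral sketch of the two pieces involved (the request you would send with your API key, and the bonus step of splitting a multi-line answer into rows), here is the idea in Python. The endpoint and model name reflect the GPT-3-era API and should be treated as assumptions:

```python
import json

def build_completion_request(prompt, api_key):
    """Shape of the HTTP call an Office Script would make to OpenAI.
    The endpoint and model are GPT-3-era values (assumptions)."""
    return {
        "url": "https://api.openai.com/v1/completions",
        "headers": {"Authorization": f"Bearer {api_key}",
                    "Content-Type": "application/json"},
        "body": json.dumps({"model": "text-davinci-003",
                            "prompt": prompt,
                            "max_tokens": 256}),
    }

def split_for_cells(answer):
    """Bonus step: split a multi-line answer into one entry per line,
    ready to be written into consecutive Excel cells."""
    return [line.strip() for line in answer.splitlines() if line.strip()]

print(split_for_cells("Tony Soprano\nCarmela Soprano\nChristopher Moltisanti"))
```

The parsing step is what lets a multi-line answer land in a column of cells rather than one overstuffed cell.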
By way of a few paragraphs inserted into the recently enacted 4,000-page 2023 National Defense Authorization Act, Congress mandated that state and local governments prepare their annual financial statements in a standardized format that is electronically searchable. The provision effectively drags state and local governments kicking and screaming into the 20th century, if not the 21st.
As worthy an accomplishment as this appears to be, it was resisted mightily by the state and local government financial community. Most prominently, they argue, the measure can potentially result in a major transfer of accounting and reporting regulatory authority from states to the federal government, thereby undercutting what many consider a fundamental principle of federalism. Moreover, state and local officials see it as one more costly unfunded mandate imposed upon their governments.
The opposition by state and local governments is understandable. But they have no one to blame but themselves. To this day they are wedded to a technological past. In a perverse way, they may be getting their just deserts. The act requires them to do little more than what they should have done years ago on their own for the benefit of their investors and other stakeholders.
Implicit in the act is that governments will have to prepare their financial statements using XBRL (eXtensible Business Reporting Language) or some comparable reporting framework. This is the format that the Securities and Exchange Commission, which would be charged with implementing the new provision, currently demands corporations use in their financial filings. XBRL requires all entities to classify each of the elements of their financial statements (e.g., assets, liabilities, revenues and expenses) by identical rules and in machine-readable form.
This paper is an introduction to AI technology designed to help actuaries understand how the technology works, the potential risks it could introduce, and how to mitigate those risks. The author focuses on data bias, as it is one of the main concerns with facial recognition technology. This research project was jointly sponsored by the Diversity, Equity and Inclusion Research and the Actuarial Innovation and Technology Strategic Research Programs.
There are many books about spreadsheets out there. Most of these books will tell you things like “How to save a file” and “How to make a graph” and “How to compute the present value of a stream of cashflows” and “How to use conjoint analysis to figure out which features you should add to the next version of your company’s widgets in order to impress senior management and get a promotion and receive a pay raise so you can purchase a bigger boat than your neighbor has.”
This book isn’t about any of those. Instead, it’s about how to Think Spreadsheet. What does that mean? Well, spreadsheets lend themselves well to solving specific types of problems in specific types of ways. They lend themselves poorly to solving other specific types of problems in other specific types of ways.
Thinking Spreadsheet entails the following:
Understanding how spreadsheets work, what they do well, and what they don’t do well.
Using the spreadsheet’s structure to intelligently organize your data.
Solving problems using techniques that take advantage of the spreadsheet’s strengths.
Building spreadsheets that are easy to understand and difficult to break.
To help you learn how to Think Spreadsheet, I’ve collected a variety of curious and often whimsical examples. Some represent problems you are likely to encounter out in the wild, others problems you’ll never encounter outside of this book. Many of them we’ll solve multiple times. That’s because in each case, the means are more interesting than the ends. You’ll never (I hope) use a spreadsheet to compute all the prime numbers less than 100. But you’ll often (I hope) find useful the techniques we’ll use to compute those prime numbers, and if you’re clever you’ll go away and apply them to all sorts of real-world problems. As with most books of this sort, you’ll really learn the most if you recreate the examples yourself and play around with them, and I strongly encourage you to do so.
Author(s): Joel Grus
Publication Date: originally in dead-tree form 2010, accessed 29 Oct 2022
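(An aside: the prime-number exercise mentioned above is a classic. Outside a spreadsheet, a sieve does it in a few lines; this Python sketch is purely illustrative and is not how the book does it, since the book works in spreadsheet formulas.)

```python
def primes_below(n):
    """Sieve of Eratosthenes: cross out multiples of each prime."""
    is_prime = [True] * n
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for multiple in range(i * i, n, i):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_below(100))  # 25 primes: 2, 3, 5, ..., 97
```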
Professionalization leads us to an interesting dilemma. Actuarial culture and, for that matter, organizational culture got insurance companies to where they are today. If the culture were not moderately successful, then the company would not still exist. But this is where Prospect theory emerges from the shadows. It is human nature not to want to lose the culture that enabled your success. Many people nonetheless thirst for the gains earned by moving in a new direction. Risk aversion further reinforces the stickiness of culture, especially for risk-averse professions and industries. Drawing from author Tony Robbins, you cannot become who you want to be by staying who you currently are. Our professionalization, coupled with our risk aversion, creates a double whammy. Practices appropriate to prior eras have a propensity to be locked in place. Oh, but it gets worse!
By the nature of transformation and modernization, knowledge and know-how are embedded in the current people, processes and systems. The knowledge and know-how must be migrated from the prior technology to modern technology. Just like your computer’s hard drive gets fragmented, so too does a firm’s expertise as people change focus, move jobs or leave companies. The long-dated nature of our promises can severely exacerbate the issue. Human knowledge and know-how are not very compressible, unlike biological seeds and eggs. In a time-consuming defragmenting exercise, information, knowledge and know-how must be painstakingly moved, relearned and adapted for the new system. This transformation requires new practices, further exacerbating the shock to the culture. Oh, but it gets even worse!
The transformation process requires existing teams to change, recombine or communicate in new ways. This means their cultures will potentially clash. Lack of trust and bureaucracy are the most significant frictions to collaboration among networks. The direct evidence of this is when project managers vent that teams x, y and z cannot seem to work together. It is because they do not have a reference system to know how to work together.
Before we get into the different approaches, why should you care about knowing multiple ways to calculate a distribution when we have a perfectly good symbolic formula that tells us the probability exactly?
As we shall soon see, having that formula gives us the illusion that we have the “exact” answer. We actually have to calculate the elements within. If you try calculating the binomial coefficients up front, you will notice they get very large, just as those powers of q get very small. In a system using floating point arithmetic, as Excel does, we may run into trouble with either underflow or overflow. Obviously, I picked a situation that would create just such troubles, by picking a somewhat large number of people and a somewhat low probability of death.
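One standard way around this, sketched below, is to work in log space: take the log of each factor (using `lgamma` for the binomial coefficient) and exponentiate only at the end. The group size and mortality rate here are made-up illustrative values:

```python
import math

def binom_pmf(n, k, q):
    """Binomial pmf computed in log space: log C(n, k) via lgamma,
    plus k*log(q) + (n-k)*log(1-q), exponentiated only at the end,
    so neither the huge coefficient nor the tiny power is ever held
    as a raw float."""
    log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1)
               - math.lgamma(n - k + 1)
               + k * math.log(q) + (n - k) * math.log1p(-q))
    return math.exp(log_pmf)

# The naive route fails: C(100000, 50000) has roughly 30,000 digits
# and cannot even be converted to a float. The log-space version is
# untroubled:
print(binom_pmf(100_000, 100, 0.001))  # P(exactly 100 deaths)
```

For small cases this agrees with the direct formula to machine precision, which is a useful sanity check before trusting it on the large ones.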
I am making no assumptions as to the specific use of the full distribution being made. It may be that one is attempting to calculate Value at Risk or Conditional Tail Expectation values. It may be that one is constructing stress scenarios. Most of the places where the following approximations fail are areas that are not necessarily of concern to actuaries, in general. In the following I will look at how each approximation behaves, and why one might choose that approach compared to others.
Unfortunately, fraud is rampant in the insurance industry. Property and casualty insurance alone loses about $30 billion to fraud every year, and fraud occurs in nearly 10% of all P&C losses. ML can mitigate this issue by flagging potentially fraudulent claims early in the process, giving insurers time to investigate and correctly identify fraud.
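As a toy sketch of what “flagging early” can look like (the single feature, threshold, and data below are all invented for illustration; real fraud models use far richer claim features):

```python
def flag_suspicious(claims, z_threshold=3.0):
    """Flag claims whose amount is a statistical outlier for the book
    of business (a stand-in for the richer inputs, such as claim
    history, provider patterns and text signals, that a real ML model
    would use)."""
    amounts = [c["amount"] for c in claims]
    mean = sum(amounts) / len(amounts)
    std = (sum((a - mean) ** 2 for a in amounts) / len(amounts)) ** 0.5
    return [c["id"] for c in claims
            if std > 0 and abs(c["amount"] - mean) / std > z_threshold]

# Twenty ordinary claims and one that dwarfs them:
claims = [{"id": i, "amount": 1_000} for i in range(20)]
claims.append({"id": 99, "amount": 50_000})
print(flag_suspicious(claims))  # [99]
```

The point is workflow, not sophistication: a claim flagged at submission can be routed to an investigator before it is paid.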
5. Claims processing
Claims processing is notoriously arduous and time-consuming. ML technology is a tool to reduce processing costs and time, from the initial claim submission to reviewing coverages. Moreover, ML supports a great customer experience because it allows the insured to check the status of their claim without having to reach out to their broker/adjuster.