Impact of AI on Mortality – Essay Collection

Link: https://www.soa.org/resources/research-reports/2024/ai-mortality-essay-collection/

PDF: https://www.soa.org/4a5e85/globalassets/assets/files/resources/research-report/2024/impact-ai-mortality/2024-impact-ai-mort-essays.pdf

Graphic:

Excerpt:

The Society of Actuaries (SOA) Research Institute’s Mortality and Longevity Strategic Research Program Steering Committee issued a call for essays to explore the application of artificial intelligence (AI) to mortality and longevity. The objective was to gather a variety of perspectives and experiences on the use of AI in mortality modeling, forecasting and prediction to promote discussion and future research around this topic.


The collection includes the six essays accepted for publication from all submissions. Two essays were chosen for prizes based on their creativity, originality, and likelihood of stimulating further thought on the subject matter.

Author(s): multiple

Publication Date: September 2024

Publication Site: Society of Actuaries, SOA Research Institute

Actuarial Professionalism Considerations for Generative AI

Link: https://www.actuary.org/sites/default/files/2024-09/professionalism-paper-generative-ai.pdf

Graphic:

Excerpt:

This paper describes the use of, and professionalism considerations for, actuaries using generative artificial intelligence (GenAI) to provide actuarial services. GenAI generates text, quantitative, or image content based on training data, typically using a large language model (LLM). Examples of GenAI deployments include OpenAI GPT, Google Gemini, Claude, and Meta Llama. GenAI transforms information acquired from training data into entirely new content. In contrast, predictive AI models analyze historical quantitative data to forecast future outcomes, functioning like traditional predictive statistical models.


Actuaries have a wide range of understanding of AI. We assume the reader is broadly familiar with AI and AI model capabilities, but not necessarily a designer or expert user. In this paper, the terms “GenAI,” “AI,” “AI model(s),” and “AI tool(s)” are used interchangeably. This paper covers the professionalism fundamentals of using GenAI and only briefly discusses designing, building, and customizing GenAI systems. This paper focuses on actuaries using GenAI to support actuarial conclusions, not on minor incidental use of AI that duplicates the function of tools such as plug-ins, co-pilots, spreadsheets, internet search engines, or writing aids.


GenAI is a recent development, but the actuarial professionalism framework helps actuaries use GenAI appropriately: the Code of Professional Conduct, the Qualification Standards for Actuaries Issuing Statements of Actuarial Opinion in the United States (USQS), and the actuarial standards of practice (ASOPs). Although ASOP No. 23, Data Quality; No. 41, Actuarial Communications; and No. 56, Modeling, were developed before GenAI was widely available, each applies in situations when GenAI may now be used. The following discussion comments on these topics, focusing extensively on the application of ASOP No. 56, which provides guidance for actuaries when they are designing, developing, selecting, modifying, using, reviewing, or evaluating models. GenAI is a model; thus ASOP No. 56 applies.


The paper explores use cases and addresses conventional applications, including quantitative and qualitative analysis, as of mid-2024, rather than anticipating novel uses or combinations of applications. AI tools change quickly, so the paper focuses on principles rather than the technology. The scope of this paper does not include explaining how AI models are structured or function, nor does it offer specific guidelines on AI tools or their use by the actuary in professional settings. Given the rapid rate of change within this space, the paper makes no predictions about the rapidly evolving technology, nor does it speculate on future challenges to professionalism.

Author(s): Committee on Professional Responsibility of the American Academy of Actuaries

Committee on Professional
Responsibility
Geoffrey C. Sandler, Chairperson
Brian Donovan
Richard Goehring
Laura Maxwell
Shawn Parks
Matthew Wininger
Kathleen Wong
Yukki Yeung
Paul Zeisler
Melissa Zrelack

Artificial Intelligence Task Force
Prem Boinpally
Laura Maxwell
Shawn Parks
Fei Wang
Matt Wininger
Kathy Wong
Yukki Yeung

Publication Date: September 2024

Publication Site: American Academy of Actuaries

Actuarial ChatBots

Link: https://riskviews.wordpress.com/actuarial-chatbots/

Graphic:

Excerpt:

Here are several examples of ChatBots and other AI applications for actuaries to try.

Answers that you might get from a general AI LLM such as ChatGPT may or may not correctly represent the latest thinking in actuarial science. These ChatBots make an effort to educate the LLM with actuarial or other pertinent literature so that you can get better-informed answers.

But you need to be a critical user. Please be careful with the responses that you get from these ChatBots and let us know if you find any issues. These are still early days for the use of AI in actuarial practice, and we need to learn from our experiences and move forward.

Note from meep: there are multiple Apps/Bots linked from the main site.

Author(s): David Ingram

Publication Date: accessed 28 Aug 2024

Publication Site: Risk Views

Comments on Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector

Link: https://www.regulations.gov/document/TREAS-DO-2024-0011-0001/comment

Description:

Publicly available comments on Dept of Treasury’s request for information on AI use, opportunities & risk in financial services sector.

Example: https://www.regulations.gov/comment/TREAS-DO-2024-0011-0010 — comment from ACLI

The NAIC has developed its definition of AI, and the insurance industry has responded with information in accordance with that definition. Any definition developed by Treasury should align with, or at a minimum not conflict with, definitions of AI in existing regulatory frameworks for financial institutions.

The Treasury definition of AI should reflect the following:

- Definitions should be tailored to the different types of AI and the use cases and risks they pose. The definition used in this RFI is similar to an outdated definition put forth by the Organisation for Economic Co-operation and Development (OECD), which could be narrowed for specific use cases (e.g., tiering of risks under the EU framework).
- There are also distinctions between generative AI used to make decisions without ultimately including human input or intervention, and AI used where human decision-making remains absolute or the usage is solely for internal efficiencies and therefore not impactful for customers.
- AI covers a broad range of predictive modeling techniques that would otherwise not be considered artificial intelligence. A refinement to the definition that classifies AI as machine learning systems that utilize artificial neural networks to make predictions may be more appropriate.
- The definition of AI should exclude simpler computation tasks that companies have been using for a long time.

Author(s): Various

Publication Date: accessed 9 Aug 2024

Publication Site: Regulations.gov

Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector

Link: https://www.federalregister.gov/documents/2024/06/12/2024-12336/request-for-information-on-uses-opportunities-and-risks-of-artificial-intelligence-in-the-financial

Excerpt:

SUMMARY:

The U.S. Department of the Treasury (Treasury) is seeking comment through this request for information (RFI) on the uses, opportunities and risks presented by developments and applications of artificial intelligence (AI) within the financial sector. Treasury is interested in gathering information from a broad set of stakeholders in the financial services ecosystem, including those providing, facilitating, and receiving financial products and services, as well as consumer and small business advocates, academics, nonprofits, and others.

DATES:

Written comments and information are requested on or before August 12, 2024.

….

Oversight of AI—Explainability and Bias

The rapid development of emerging AI technologies has created challenges for financial institutions in the oversight of AI. Financial institutions may have an incomplete understanding of where the data used to train certain AI models and tools was acquired and what the data contains, as well as how the algorithms or structures are developed for those AI models and tools. For instance, machine-learning algorithms that internalize data based on relationships that are not easily mapped and understood by financial institution users create questions and concerns regarding explainability, which could lead to difficulty in assessing the conceptual soundness of such AI models and tools.[22]

Financial regulators have issued guidance on model risk management principles, encouraging financial institutions to effectively identify and mitigate risks associated with model development, model use, model validation (including validation of vendor and third-party models), ongoing monitoring, outcome analysis, and model governance and controls.[23] These principles are technology-agnostic but may not be applicable to certain AI models and tools. Due to their inherent complexity, however, AI models and tools may exacerbate certain risks that may warrant further scrutiny and risk mitigation measures. This is particularly true in relation to the use of emerging AI technologies.

Furthermore, the rapid development of emerging AI technologies may create a human capital shortage in financial institutions, where sufficient knowledge about a potential risk or bias of those AI technologies may be lacking such that staff may not be able to effectively manage the development, validation, and application of those AI technologies. Some financial institutions may rely on third-party providers to develop and validate AI models and tools, which may also create challenges in ensuring alignment with relevant risk management guidance.

Challenges in explaining AI-assisted or AI-generated decisions also create questions about transparency generally, and raise concerns about the potential obfuscation of model bias that can negatively affect impacted entities. In the Non-Bank Report, Treasury noted the potential for AI models to perpetuate discrimination by utilizing and learning from data that reflect and reinforce historical biases.[24] These challenges of managing explainability and bias may impede the adoption and use of AI by financial institutions.

Author(s): Department of the Treasury.

Publication Date: 6/12/2024

Publication Site: Federal Register

ChatGPT: A conversation about underwriting and life insurance

Link: https://www.dig-in.com/opinion/chatgpt-a-conversation-about-underwriting-life-insurance

Excerpt:

RGA decided to put ChatGPT to the test by posing three questions related to underwriting and the future of life insurance. The answers from ChatGPT that follow are entirely AI-generated. Following those responses is a brief FAQ that further explores this emerging topic.

How will COVID-19 affect U.S. long-term mortality?

It is difficult to predict the long-term effects of COVID-19 on mortality in the United States. The pandemic has already had a significant impact on mortality, with over 1.1 million COVID-attributed deaths reported in the country. However, it is possible that the mortality rate from COVID-19 could decline over time as more people become immune to the virus and as effective vaccines are developed and distributed.

There are also other factors that could affect the long-term mortality rate from COVID-19. For example, the effectiveness of public health measures, such as mask-wearing and social distancing, in controlling the spread of the virus could have a significant impact on the number of deaths. In addition, the availability of effective treatments for COVID-19 could also play a role in reducing the mortality rate from the virus.

It is important to continue monitoring the situation and taking steps to control the spread of the virus to minimize the impact on mortality.

Author(s): ChatGPT, Jeff Heaton, Dave Rengachary

Publication Date: 14 Feb 2023

Publication Site: Digital Insurance

Bring ChatGPT INSIDE Excel to Solve ANY Problem Lightning FAST

Link: https://www.youtube.com/watch?v=kQPUWryXwag&ab_channel=LeilaGharani

Video:

Description:

OpenAI inside Excel? How can you use an API key to connect to an AI model from Excel? This video shows you how. You can download the files from the GitHub link above. Wouldn’t it be great to have a search box in Excel you can use to ask any question? Like to create dummy data, create a formula, or ask about the cast of The Sopranos. And then artificial intelligence provides the information directly in Excel, without any copy and pasting! In this video you’ll learn how to set up an API connection from Microsoft Excel to OpenAI’s ChatGPT (GPT-3) by using Office Scripts. As a bonus, I’ll show you how you can parse the result if the answer from GPT-3 spans more than one line. This makes it easier to use the information in Excel.
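The “parse a multi-line answer” step the video describes can be sketched independently of Excel. The video itself uses Office Scripts (TypeScript); the Python below is an illustrative stand-alone version, and `split_into_rows` is a hypothetical helper name, not from the video:

```python
def split_into_rows(completion: str) -> list[str]:
    """Split a multi-line model response into one entry per line,
    dropping blank lines so each entry can fill its own spreadsheet cell."""
    return [line.strip() for line in completion.splitlines() if line.strip()]

# A three-line answer becomes three cell values.
answer = "Tony Soprano\nCarmela Soprano\n\nDr. Jennifer Melfi"
rows = split_into_rows(answer)
# rows == ["Tony Soprano", "Carmela Soprano", "Dr. Jennifer Melfi"]
```

The same split-and-strip logic carries over directly to an Office Script before writing each entry to its own row.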

Author(s): Leila Gharani

Publication Date: 6 Feb 2023

Publication Site: Youtube

Data Challenges in Building a Facial Recognition Model and How to Mitigate Them

Link: https://www.soa.org/resources/research-reports/2023/data-facial-rec/

PDF: https://www.soa.org/49022b/globalassets/assets/files/resources/research-report/2023/dei107-facial-recognition-challenges.pdf

Graphic:

Excerpt:

This paper is an introduction to AI technology designed for actuaries to understand how the technology works, the potential risks it could introduce, and how to mitigate those risks. The author focuses on data bias, as it is one of the main concerns with facial recognition technology. This research project was jointly sponsored by the Diversity, Equity and Inclusion Research and the Actuarial Innovation and Technology Strategic Research Programs.

Author(s): Victoria Zhang, FSA, FCIA

Publication Date: Jan 2023

Publication Site: SOA Research Institute

The amazing power of “machine eyes”

Link: https://erictopol.substack.com/p/the-amazing-power-of-machine-eyes

Graphic:

Excerpt:

Today’s report on AI analysis of retinal vessel images to help predict the risk of heart attack and stroke, from over 65,000 UK Biobank participants, reinforces a growing body of evidence that deep neural networks can be trained to “interpret” medical images far beyond what was anticipated. Add that finding to last week’s multinational study of deep learning of retinal photos to detect Alzheimer’s disease with good accuracy. In this post I am going to briefly review what has already been gleaned from 2 classic medical images—the retina and the electrocardiogram (ECG)—as representative for the exciting capability of machine vision to “see” well beyond human limits. Obviously, machines aren’t really seeing or interpreting and don’t have eyes in the human sense, but they sure can be trained from hundreds of thousands (or millions) of images to come up with outputs that are extraordinary. I hope when you’ve read this you’ll agree this is a particularly striking advance, which has not yet been actualized in medical practice, but has enormous potential.

Author(s): Eric Topol

Publication Date: 4 Oct 2022

Publication Site: Eric Topol’s substack, Ground Truths

5 insurance use cases for machine learning

Link: https://www.dig-in.com/opinion/5-use-cases-for-machine-learning-in-the-insurance-industry

Excerpt:

4. Fraud detection

Unfortunately, fraud is rampant in the insurance industry. Property and casualty insurance alone loses about $30 billion to fraud every year, and fraud occurs in nearly 10% of all P&C losses. ML can mitigate this issue by identifying potential claim situations early in the process. Flagging early allows insurers to investigate and correctly identify a fraudulent claim. 

5. Claims processing

Claims processing is notoriously arduous and time-consuming. ML technology is a tool to reduce processing costs and time, from the initial claim submission to reviewing coverages. Moreover, ML supports a great customer experience because it allows the insured to check the status of their claim without having to reach out to their broker/adjuster.
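The fraud-detection use case (item 4) amounts to an early-warning score on incoming claims. A minimal illustrative sketch of such a flag, with made-up thresholds and field names (`flag_claim` and its cutoffs are assumptions for illustration, not the article’s method):

```python
def flag_claim(amount: float, median_amount: float,
               days_since_policy_start: int) -> bool:
    """Flag a claim for manual review if it is unusually large for its
    peer class or filed very soon after the policy was bound.
    Thresholds are illustrative only."""
    too_large = amount > 3 * median_amount    # outlier vs. comparable claims
    too_soon = days_since_policy_start < 30   # claim right after binding
    return too_large or too_soon

# A $45,000 claim against a $10,000 class median, 200 days in: flagged.
print(flag_claim(45_000, 10_000, 200))  # True
```

A production system would learn such rules from labeled claims data rather than hard-coding them, but the flag-then-investigate workflow is the same.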

Author(s): Lisa Rosenblate

Publication Date: 9 Sept 2022

Publication Site: Digital Insurance

How AI and federal funding can revolutionize city budgets

Link: https://lizfarmer.substack.com/p/ai-federal-funding-city-budgets

Graphic:

Excerpt:

The big takeaway from the GFOA’s Rethinking Revenue project is that the modern economy is shifting the tax burden toward those who can least afford it. Now, the association and its partners are launching pilot programs to test some of the ideas the project has explored.

One will target the inequities built into relying on fees and fines and the GFOA is inviting governments to apply for a pilot project testing segmented pricing as a potential solution. Instead of a one-size-fits-all fine, segmented pricing is designed around a user’s ability or willingness to pay. For example, a $100 speeding ticket for someone who earns just $500 a week is a much larger financial burden than it is for someone who earns $2,000 a week. So for the lower-income transgressor, the fine is lowered to $50. It still stings, but it’s much more likely to get paid.
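The segmented-pricing arithmetic in the example can be written down directly. This sketch scales a base fine in proportion to weekly income against a $1,000/week reference, capped at the full fine; the formula and reference income are assumptions chosen to reproduce the article’s numbers, not GFOA’s actual pilot design:

```python
def segmented_fine(base_fine: float, weekly_income: float,
                   reference_income: float = 1_000.0) -> float:
    """Scale a fine to ability to pay: proportional to income below the
    reference level, capped at the full base fine at or above it."""
    scale = min(1.0, weekly_income / reference_income)
    return round(base_fine * scale, 2)

# The article's example: $100 ticket, $500/week earner -> $50.
print(segmented_fine(100, 500))    # 50.0
# A $2,000/week earner pays the full fine.
print(segmented_fine(100, 2000))   # 100.0
```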

Shane Kavanagh, GFOA’s senior manager of research, said they’re looking for around five places to test this idea and that the tested revenue source would have to be large enough (such as traffic fines) and also be one that the government has had difficulty collecting.

Author(s): Liz Farmer

Publication Date: 24 Feb 2022

Publication Site: Long Story Short

Emerging Technologies and their Impact on Actuarial Science

Link: https://www.soa.org/globalassets/assets/files/resources/research-report/2021/2021-emerging-technologies-report.pdf

Graphic:

Excerpt:

This research evaluates the current state and future outlook of emerging technologies on the actuarial profession over a three-year horizon. For the purpose of this report, a technology is considered to be a practical application of knowledge (as opposed to a specific vendor) and is considered emerging when the use of the particular technology is not already widespread across the actuarial profession. This report looks to evaluate prospective tools that actuaries can use across all aspects and domains of work spanning Life and Annuities, Health, P&C, and Pensions in relation to insurance risk.

We researched and grouped similar technologies together for ease of reading and understanding. As a result, we identified the six following technology groups:

  1. Machine Learning and Artificial Intelligence
  2. Business Intelligence Tools and Report Generators
  3. Extract-Transform-Load (ETL) / Data Integration and Low-Code Automation Platforms
  4. Collaboration and Connected Data
  5. Data Governance and Sharing
  6. Digital Process Discovery (Process Mining / Task Mining)

Author(s):

Nicole Cervi, Deloitte
Arthur da Silva, FSA, ACIA, Deloitte
Paul Downes, FIA, FCIA, Deloitte
Marwah Khalid, Deloitte
Chenyi Liu, Deloitte
Prakash Rajgopal, Deloitte
Jean-Yves Rioux, FSA, CERA, FCIA, Deloitte
Thomas Smith, Deloitte
Yvonne Zhang, FSA, FCIA, Deloitte

Publication Date: October 2021

Publication Site: Society of Actuaries, SOA Research Institute