This project aims to empower the actuarial profession with modern machine learning and AI tools. We provide comprehensive teaching materials consisting of lecture notes (a technical document) that build the theoretical foundation of this initiative. Each chapter of the lecture notes is supported by notebooks and slides that provide teaching material, practical guidance, and applied examples. In addition, hands-on exercises in both R and Python are provided in further notebooks.
Author(s): Mario V. Wüthrich, Ronald Richman, Benjamin Avanzi, Mathias Lindholm, Michael Mayer, Jürg Schelldorfer, Salvatore Scognamiglio
The Society of Actuaries (SOA) Research Institute’s Mortality and Longevity Strategic Research Program Steering Committee issued a call for essays to explore the application of artificial intelligence (AI) to mortality and longevity. The objective was to gather a variety of perspectives and experiences on the use of AI in mortality modeling, forecasting and prediction to promote discussion and future research around this topic.
The collection includes the six essays accepted for publication from all submissions. Two essays were awarded prizes based on their creativity, originality, and likelihood of prompting further thought on the subject matter.
Author(s): multiple
Publication Date: September 2024
Publication Site: Society of Actuaries, SOA Research Institute
This paper describes the use of generative artificial intelligence (GenAI) to provide actuarial services and the professionalism considerations for actuaries doing so. GenAI generates text, quantitative, or image content based on training data, typically using a large language model (LLM). Examples of GenAI deployments include OpenAI GPT, Google Gemini, Claude, and Meta Llama. GenAI transforms information acquired from training data into entirely new content. In contrast, predictive AI models analyze historical quantitative data to forecast future outcomes, functioning like traditional predictive statistical models.
Actuaries' understanding of AI varies widely. We assume the reader is broadly familiar with AI and AI model capabilities, but is not necessarily a designer or expert user. In this paper, the terms “GenAI,” “AI,” “AI model(s),” and “AI tool(s)” are used interchangeably. This paper covers the professionalism fundamentals of using GenAI and only briefly discusses designing, building, and customizing GenAI systems. It focuses on actuaries using GenAI to support actuarial conclusions, not on minor incidental uses of AI that duplicate the function of tools such as plug-ins, co-pilots, spreadsheets, internet search engines, or writing aids.
GenAI is a recent development, but the existing actuarial professionalism framework helps actuaries use it appropriately: the Code of Professional Conduct, the Qualification Standards for Actuaries Issuing Statements of Actuarial Opinion in the United States (USQS), and the actuarial standards of practice (ASOPs). Although ASOP No. 23, Data Quality; ASOP No. 41, Actuarial Communications; and ASOP No. 56, Modeling, were developed before GenAI was widely available, each applies to situations where GenAI may now be used. The following discussion comments on these topics, focusing especially on ASOP No. 56, which provides guidance for actuaries who are designing, developing, selecting, modifying, using, reviewing, or evaluating models. GenAI is a model; thus, ASOP No. 56 applies.
The paper explores use cases and addresses conventional applications as of mid-2024, including quantitative and qualitative analysis, rather than anticipating novel uses or combinations of applications. AI tools change quickly, so the paper focuses on principles rather than on the technology itself. The scope of this paper does not include explaining how AI models are structured or how they function, nor does it offer guidelines on specific AI tools or their use by actuaries in professional settings. Given the rapid rate of change in this space, the paper makes no predictions about the evolving technology, nor does it speculate on future challenges to professionalism.
Author(s): Committee on Professional Responsibility of the American Academy of Actuaries
Committee on Professional Responsibility: Geoffrey C. Sandler (Chairperson), Brian Donovan, Richard Goehring, Laura Maxwell, Shawn Parks, Matthew Wininger, Kathleen Wong, Yukki Yeung, Paul Zeisler, Melissa Zrelack
Artificial Intelligence Task Force: Prem Boinpally, Laura Maxwell, Shawn Parks, Fei Wang, Matt Wininger, Kathy Wong, Yukki Yeung
Here are several examples of chatbots and other AI applications for actuaries to try.
Answers that you might get from a general-purpose LLM such as ChatGPT may or may not correctly represent the latest thinking in actuarial science. These chatbots make an effort to educate the LLM with actuarial or other pertinent literature so that you can get better-informed answers.
But you still need to be a critical user. Please be careful with the responses you get from these chatbots, and let us know if you find any issues. These are still early days for the use of AI in actuarial practice, and we need to learn from our experiences and move forward.
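For readers curious what “educating the LLM with actuarial literature” usually involves, the sketch below shows the general retrieval-augmented pattern in TypeScript. It is a minimal illustration, assuming a hypothetical buildGroundedPrompt helper and made-up sources; none of it is taken from the chatbots linked here.

```typescript
// Illustrative sketch only, not the implementation of any linked chatbot:
// a common way to "educate" an LLM with domain literature is to retrieve
// relevant passages and prepend them to the user's question, so the model
// answers from the supplied sources rather than from memory alone.
type Passage = { source: string; text: string };

function buildGroundedPrompt(question: string, retrieved: Passage[]): string {
  // Number each excerpt so the model can cite it in its answer.
  const context = retrieved
    .map((p, i) => `[${i + 1}] (${p.source}) ${p.text}`)
    .join("\n");
  return (
    "Answer the question using only the actuarial excerpts below.\n" +
    "Cite excerpt numbers, and say so if the excerpts are insufficient.\n\n" +
    `Excerpts:\n${context}\n\nQuestion: ${question}`
  );
}

// Hypothetical usage with a made-up retrieved passage:
const prompt = buildGroundedPrompt(
  "How does IFRS 17 define the contractual service margin?",
  [{
    source: "IFRS 17 primer",
    text: "The contractual service margin represents the unearned profit..."
  }]
);
console.log(prompt);
```

A design like this also helps the critical user: because the model is asked to cite the supplied excerpts, you can check its answer against the underlying literature.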
Note from meep: there are multiple Apps/Bots linked from the main site.
The NAIC has developed its definition of AI, and the insurance industry has responded with information in accordance with that definition. Any definition developed by Treasury should align with, or at a minimum not conflict with, definitions of AI in existing regulatory frameworks for financial institutions.
The Treasury definition of AI should reflect the following:

- Definitions should be tailored to the different types of AI and to the use cases and risks they pose. The definition used in this RFI is similar to an outdated definition put forth by the Organisation for Economic Co-operation and Development (OECD) and could be narrowed for specific use cases (e.g., tiering of risks under the EU framework).
- There is also a distinction between generative AI used to make decisions without ultimate human input or intervention, and AI used where human decision-making remains final or where the usage is solely for internal efficiencies and therefore does not affect customers.
- As defined, AI covers a broad range of predictive modeling techniques that would not otherwise be considered artificial intelligence. A refinement that classifies AI as machine learning systems using artificial neural networks to make predictions may be more appropriate.
- The definition of AI should exclude simpler computational tasks that companies have been using for a long time.
The U.S. Department of the Treasury (Treasury) is seeking comment through this request for information (RFI) on the uses, opportunities and risks presented by developments and applications of artificial intelligence (AI) within the financial sector. Treasury is interested in gathering information from a broad set of stakeholders in the financial services ecosystem, including those providing, facilitating, and receiving financial products and services, as well as consumer and small business advocates, academics, nonprofits, and others.
DATES:
Written comments and information are requested on or before August 12, 2024.
….
Oversight of AI—Explainability and Bias
The rapid development of emerging AI technologies has created challenges for financial institutions in the oversight of AI. Financial institutions may have an incomplete understanding of where the data used to train certain AI models and tools was acquired and what the data contains, as well as how the algorithms or structures are developed for those AI models and tools. For instance, machine-learning algorithms that internalize data based on relationships that are not easily mapped and understood by financial institution users create questions and concerns regarding explainability, which could lead to difficulty in assessing the conceptual soundness of such AI models and tools.[22]
Financial regulators have issued guidance on model risk management principles, encouraging financial institutions to effectively identify and mitigate risks associated with model development, model use, model validation (including validation of vendor and third-party models), ongoing monitoring, outcome analysis, and model governance and controls.[23] These principles are technology-agnostic but may not be applicable to all AI models and tools. Moreover, due to their inherent complexity, AI models and tools may exacerbate certain risks that warrant further scrutiny and risk mitigation measures. This is particularly true in relation to the use of emerging AI technologies.
Furthermore, the rapid development of emerging AI technologies may create a human capital shortage at financial institutions: staff may lack sufficient knowledge of the potential risks or biases of those AI technologies to effectively manage their development, validation, and application. Some financial institutions may rely on third-party providers to develop and validate AI models and tools, which may also create challenges in ensuring alignment with relevant risk management guidance.
Challenges in explaining AI-assisted or AI-generated decisions also create questions about transparency generally, and raise concerns about the potential obfuscation of model bias that can negatively affect impacted entities. In the Non-Bank Report, Treasury noted the potential for AI models to perpetuate discrimination by utilizing and learning from data that reflect and reinforce historical biases.[24] These challenges of managing explainability and bias may impede the adoption and use of AI by financial institutions.
RGA decided to put ChatGPT to the test by posing three questions related to underwriting and the future of life insurance. The answers from ChatGPT that follow are entirely AI-generated. Following those responses is a brief FAQ that further explores this emerging topic.
How will COVID-19 affect U.S. long-term mortality?
It is difficult to predict the long-term effects of COVID-19 on mortality in the United States. The pandemic has already had a significant impact on mortality, with over 1.1 million COVID-attributed deaths reported in the country. However, it is possible that the mortality rate from COVID-19 could decline over time as more people become immune to the virus and as effective vaccines are developed and distributed.
There are also other factors that could affect the long-term mortality rate from COVID-19. For example, the effectiveness of public health measures, such as mask-wearing and social distancing, in controlling the spread of the virus could have a significant impact on the number of deaths. In addition, the availability of effective treatments for COVID-19 could also play a role in reducing the mortality rate from the virus.
It is important to continue monitoring the situation and taking steps to control the spread of the virus to minimize the impact on mortality.
OpenAI inside Excel? How can you use an API key to connect to an AI model from Excel? This video shows you how, and you can download the files from the GitHub link above. Wouldn’t it be great to have a search box in Excel you can use to ask any question? Like to create dummy data, create a formula, or ask about the cast of The Sopranos, and then have artificial intelligence provide the information directly in Excel without any copying and pasting. In this video you’ll learn how to set up an API connection from Microsoft Excel to OpenAI’s ChatGPT (GPT-3) by using Office Scripts. As a bonus, I’ll show you how to parse the result if the answer from GPT-3 spans more than one line, which makes it easier to use the information in Excel.
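To give a concrete picture of the approach the video describes, here is a minimal Office Script sketch in TypeScript. The cell layout (prompt in A1, answer written from A2 downward), the placeholder API key, and the model name are illustrative assumptions, not taken from the video’s files.

```typescript
// Minimal Office Script sketch, run from Excel's Automate tab.
// Assumed layout: the prompt sits in cell A1; the answer is written
// below it. Replace YOUR_API_KEY with your own OpenAI API key.
async function main(workbook: ExcelScript.Workbook) {
  const sheet = workbook.getActiveWorksheet();
  const prompt = String(sheet.getRange("A1").getValue());

  // Call the legacy GPT-3 completions endpoint the video refers to.
  const response = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer YOUR_API_KEY" // placeholder, not a real key
    },
    body: JSON.stringify({
      model: "text-davinci-003",
      prompt: prompt,
      max_tokens: 256
    })
  });
  const data = (await response.json()) as { choices: { text: string }[] };

  // Bonus step: if the answer spans several lines, split on newlines and
  // write one line per row starting in A2, so it is usable as Excel data.
  const lines = data.choices[0].text.trim().split("\n");
  lines.forEach((line, i) => {
    sheet.getRange("A2").getOffsetRange(i, 0).setValue(line);
  });
}
```

Embedding the key directly in the script is fine for a personal demo, but it should not be left in workbooks that are shared with others.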
This paper is an introduction to AI technology, designed to help actuaries understand how the technology works, the potential risks it could introduce, and how to mitigate those risks. The author focuses on data bias, as it is one of the main concerns with facial recognition technology. This research project was jointly sponsored by the Diversity, Equity and Inclusion Research and the Actuarial Innovation and Technology Strategic Research Programs.
Today’s report on AI analysis of retinal vessel images to help predict the risk of heart attack and stroke, drawing on over 65,000 UK Biobank participants, reinforces a growing body of evidence that deep neural networks can be trained to “interpret” medical images far beyond what was anticipated. Add that finding to last week’s multinational study of deep learning of retinal photos to detect Alzheimer’s disease with good accuracy. In this post I am going to briefly review what has already been gleaned from two classic medical images—the retina and the electrocardiogram (ECG)—as representative of the exciting capability of machine vision to “see” well beyond human limits. Obviously, machines aren’t really seeing or interpreting and don’t have eyes in the human sense, but they sure can be trained from hundreds of thousands (or millions) of images to come up with outputs that are extraordinary. I hope when you’ve read this you’ll agree this is a particularly striking advance, one which has not yet been actualized in medical practice but has enormous potential.
Author(s): Eric Topol
Publication Date: 4 Oct 2022
Publication Site: Eric Topol’s substack, Ground Truths