Actuarial ChatBots

Link: https://riskviews.wordpress.com/actuarial-chatbots/

Graphic:

Excerpt:

Here are several examples of ChatBots and other AI applications for actuaries to try.

Answers that you might get from a general-purpose LLM such as ChatGPT may or may not correctly represent the latest thinking in actuarial science. These ChatBots make an effort to educate the LLM with actuarial and other pertinent literature so that you can get better-informed answers.

But you need to be a critical user. Please be careful with the responses you get from these ChatBots, and let us know if you find any issues. These are still early days for the use of AI in actuarial practice, and we need to learn from our experiences and move forward.

Note from meep: there are multiple Apps/Bots linked from the main site.

Author(s): David Ingram

Publication Date: accessed 28 Aug 2024

Publication Site: Risk Views

Large language models propagate race-based medicine

Link: https://www.nature.com/articles/s41746-023-00939-z

Graphic:

For each question and each model, the rating represents the number of runs (out of 5 total runs) that had concerning race-based responses. Red correlates with a higher number of concerning race-based responses.

Abstract:

Large language models (LLMs) are being integrated into healthcare systems, but these models may recapitulate harmful, race-based medicine. The objective of this study is to assess whether four commercially available large language models (LLMs) propagate harmful, inaccurate, race-based content when responding to eight different scenarios that check for race-based medicine or widespread misconceptions around race. Questions were derived from discussions among four physician experts and prior work on race-based medical misconceptions believed by medical trainees. We assessed four large language models with nine different questions, each interrogated five times, for a total of 45 responses per model. All models had examples of perpetuating race-based medicine in their responses. Models were not always consistent in their responses when asked the same question repeatedly. LLMs are being proposed for use in the healthcare setting, with some models already connecting to electronic health record systems. However, based on our findings, these LLMs could potentially cause harm by perpetuating debunked, racist ideas.
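
To make the scoring behind the graphic concrete, here is a minimal sketch of the evaluation loop the abstract describes: four models, nine questions, five runs per question, with each cell's rating being the count of runs flagged as concerning. This is an illustration only; the function names (query_model, is_concerning) and the model/question labels are placeholders, not the authors' actual code or prompts.

```python
# Hypothetical sketch of the tallying scheme described in the abstract:
# each model answers each question five times, and the rating is the number
# of runs flagged as containing concerning race-based content.
from collections import defaultdict

MODELS = ["model_a", "model_b", "model_c", "model_d"]  # stand-ins for the four commercial LLMs
QUESTIONS = [f"question_{i}" for i in range(1, 10)]    # stand-ins for the nine prompts
RUNS_PER_QUESTION = 5

def query_model(model: str, question: str) -> str:
    """Placeholder for an API call returning the model's free-text answer."""
    raise NotImplementedError

def is_concerning(response: str) -> bool:
    """Placeholder for the expert review that flags debunked, race-based content."""
    raise NotImplementedError

def run_evaluation() -> dict:
    # ratings[model][question] = number of runs (out of 5) flagged as concerning,
    # which is what the red shading in the graphic encodes.
    ratings = defaultdict(dict)
    for model in MODELS:
        for question in QUESTIONS:
            flagged = sum(
                is_concerning(query_model(model, question))
                for _ in range(RUNS_PER_QUESTION)
            )
            ratings[model][question] = flagged
    return ratings
```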

Author(s): Jesutofunmi A. Omiye, Jenna C. Lester, Simon Spichak, Veronica Rotemberg & Roxana Daneshjou

Publication Date: 20 Oct 2023

Publication Site: npj Digital Medicine