How persuasive chatbots might be used in insurance

Excerpt:

Individuals have a different kind of relationship with insurance than with any other product or service. Though it is the most effective risk-mitigation tool, it still requires a hard push from insurers and regulators before people will purchase it. The thought of insurance can evoke every emotion except joy. The main reason is that insurance is a promise about the future: it assures compensation when a covered risk event happens. This runs exactly counter to the strong impulses of scarcity and immediacy bias.

As in any other industry, persuadable events in insurance can be based on reactive or proactive triggers that encourage positive events or discourage negative ones. Depending on the intelligence built into the back-end systems and the extent to which customer data is consolidated, proactive persuasion can be personalized to a customer rather than limited to generalized promotion of a new product or program. It can target other persuadable events on the policy for which the chat is in progress, or expand to cover events from the customer's other policies.

An indicative list of the persuadable events in an insurance policy could be categorized as given in Table 2.

Author(s): Srivathsan Karanai Margan

Publication Date: September/October 2021

Publication Site: Contingencies

Excel autocorrect errors still plague genetic research

Link: https://cosmosmagazine.com/science/biology/excel-autocorrect-errors-still-plague-genetic-research/

Excerpt:

Earlier this year we repeated our analysis. This time we expanded it to cover a wider selection of open access journals, anticipating researchers and journals would be taking steps to prevent such errors appearing in their supplementary data files.

We were shocked to find that, over the period 2014 to 2020, 3,436 articles (around 31% of our sample) contained gene-name errors. It seems the problem has not gone away; it is actually getting worse.

Author(s): Mark Ziemann, Deakin University and Mandhri Abeysooriya, Deakin University

Publication Date: 27 August 2021

Publication Site: Cosmos magazine

Insurance Futures: Global trends and issues reshaping the insurance landscape to 2035

Link: https://www.internationalinsurance.org/sites/default/files/2021-07/Milliman_Insurance_futures_compilation_7.2021_2.pdf

Excerpt:

The long-term security of coastal regions depends not simply on climate, oceans and geography, but on multiple local factors, from the politics of foreign aid and investor confidence to the quality of resilience-oriented designs and ‘managed retreat’.

Take some examples. In 2017, the drought in Cape Town and the lack of resilient water infrastructure led to a downgrade by Moody’s. Wildfires in the Trinity Public Utilities District in California led to similar downgrades in 2019. Moody’s have developed a ‘heat map’[3] that shows the credit exposure to environmental risk across sectors representing US$74.6 trillion in debt. In the short term, the unregulated utilities and power companies are exposed to ‘elevated risk’. The risks to automobile manufacturers, oil and gas independents and transport companies are growing. BlackRock’s report from April 2019, focused primarily on physical climate risk, showed that securities backed by commercial real estate mortgages could be confronted with losses of up to 3.8 per cent due to storm- and flood-related cash flow shortages.[4] Climate change has already reduced local GDP, with Miami top of the list. The report was amongst the first to link high-level climate risk to location analysis of assets such as plants, property and equipment.

In other words, adaptation and resilience options are also uniquely local. The outcomes hinge on mapping long-term interdependencies to predict physical-world changes and explore how core economic and social systems transition to a sustainable world.

Publication Date: July 2021

Publication Site: International Insurance Society

Intro to Financial Modelling – Part 19: Wrap-up

Link: https://www.icaew.com/technical/technology/excel/excel-community/excel-community-articles/2021/intro-to-financial-modelling-part-19

Excerpt:

There has been significant disruption in how organisations conduct business and in the way we work over the past year and a half. However, financial modellers and developers have had to continue to build, refine and test their models throughout these unprecedented times. Figure 1 below summarises the areas we have covered in the blog series and how they fit together to form practical guidance on how to follow and implement the Financial Modelling Code.

Author(s): Andrew Paw

Publication Date: 19 August 2021

Publication Site: ICAEW

The Life Modeling Problem: A Comparison of Julia, Rust, Python, and R

Link: https://juliaactuary.org/blog/life-modeling-problem/

Excerpt:

All of the submissions and algorithms above worked, and fast enough that each gave an answer in very little time. And much of the time, the volume of data to process is small enough that it doesn’t matter.

But remember the CUNA Mutual example from above: Let’s say that CUNA’s runtime is already as fast as it can be, and index it to the fastest result in the benchmarks below. The difference between the fastest “couple of days” run and the slowest would be over 721 years. So it’s important to use tools and approaches that are performant for actuarial work.

So for little one-off tasks, it doesn’t make a big difference what tool or algorithm is used. More often than not, your one-off calculations or checks will be done fast enough that it’s not important to be picky. But if you want to scale your work to a broader application within your company or the industry, I think it’s important to be performance-minded[4].
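
The 721-year figure is a simple scaling of run time by benchmark speed. A back-of-envelope sketch of that arithmetic, assuming "a couple of days" means exactly two days (the real benchmark table is in the original post):

```python
# Only the "couple of days" baseline and the 721-year gap come from the
# excerpt above; the two-day figure is an assumed round number.
fastest_days = 2.0   # assumed fastest end-to-end run
years_gap = 721.0    # quoted gap between fastest and slowest approaches

# Implied slowdown of the slowest benchmarked approach versus the fastest:
ratio = years_gap * 365.25 / fastest_days
print(f"implied slowdown: about {ratio:,.0f}x")  # about 131,673x
```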

Author(s): Alec Loudenback

Publication Date: 16 May 2021

Publication Site: JuliaActuary

Julia for Actuaries

Link: https://juliaactuary.org/blog/julia-actuaries/

Excerpt:

Looking at other great tools like R and Python, it can be difficult to summarize a single reason to motivate a switch to Julia, but hopefully this article piqued your interest in trying it for your next project.

That said, Julia shouldn’t be the only tool in your tool-kit. SQL will remain an important way to interact with databases. R and Python aren’t going anywhere in the short term and will always offer a different perspective on things!

In an earlier article, I talked about becoming a 10x Actuary, which means being proficient in the language of computers so that you can build and implement great things. In a large way, the choice of tools and paradigms shapes your focus. Productivity is one aspect, expressiveness another, speed one more. There are many reasons to think about what tools you use, and trying out different ones is probably the best way to find what works best for you.

It is said that you cannot fully conceptualize something unless your language has a word for it. As with spoken language, you may find that breaking out of spreadsheet coordinates (and even a dataframe-centric view of the world) reveals different questions to ask and enables innovative ways to solve problems. In this way, you reward your intellect while building more meaningful and relevant models and analyses.

Author(s): Alec Loudenback

Publication Date: 9 July 2020

Publication Site: JuliaActuary

Autocorrect errors in Excel still creating genomics headache

Link: https://www.nature.com/articles/d41586-021-02211-4

Excerpt:

In 2016, Mark Ziemann and his colleagues at the Baker IDI Heart and Diabetes Institute in Melbourne, Australia, quantified the problem. They found that one-fifth of papers in top genomics journals contained gene-name conversion errors in Excel spreadsheets published as supplementary data[2]. These data sets are frequently accessed and used by other geneticists, so errors can perpetuate and distort further analyses.

However, despite the issue being brought to the attention of researchers, and steps being taken to fix it, the problem is still rife, according to an updated and larger analysis led by Ziemann, now at Deakin University in Geelong, Australia[3]. His team found that almost one-third of more than 11,000 articles with supplementary Excel gene lists published between 2014 and 2020 contained gene-name errors (see ‘A growing problem’).

Simple checks can detect autocorrect errors, says Ziemann, who researches computational reproducibility in genetics. But without those checks, the errors can easily go unnoticed because of the volume of data in spreadsheets.
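
A minimal sketch of the kind of simple check Ziemann describes, not his actual pipeline. The two date patterns are illustrative assumptions about how Excel renders converted symbols such as SEPT2 or MARCH1:

```python
import re

# Date-like forms that gene symbols can become after Excel autocorrection,
# e.g. SEPT2 -> "2-Sep", MARCH1 -> "Mar-01" (patterns are illustrative).
DATE_PATTERNS = [
    re.compile(r"^\d{1,2}-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$", re.I),
    re.compile(r"^(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-\d{1,2}$", re.I),
]

def flag_converted_gene_names(names):
    """Return entries in a gene-name column that look like Excel dates."""
    return [n for n in names if any(p.match(str(n).strip()) for p in DATE_PATTERNS)]

# The two corrupted entries are flagged; real symbols pass through.
print(flag_converted_gene_names(["TP53", "2-Sep", "Mar-01", "BRCA1"]))  # ['2-Sep', 'Mar-01']
```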

Author(s): Dyani Lewis

Publication Date: 13 August 2021

Publication Site: nature

Virtual Meetup: To Err is Human but to ISERR is Never OK!

Video description:

Have you ever built a perfect financial model without any errors? Thought not! And for that reason, all good modellers know they need to include some error checks. But what is not as clear is how many error checks you should have, when you should include them and what form they should take. Excel “helpfully” provided us with functions like ISERR, ISERROR and IFERROR, but as you progress on your modelling journey you should learn to avoid these functions. Plus, you also learn the sad truth that Excel can’t even do basic maths sometimes! Join us to hear from financial modelling specialist Andrew Berg, who has spent years building models and so happily admits he has probably already made most of the mistakes you haven’t yet had a chance to! The good news is that he is willing to share the tips he has learned about the right types of error checks to add to your models, so you don’t have to learn the hard way.

★ Download the resources here ► https://plumsolutions.com.au/virtual-…
★ Register for more meetups like this ► https://plumsolutions.com.au/meetup/
★ Connect with Andrew on LinkedIn ► https://www.linkedin.com/in/andrew-be…
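
The talk itself isn't transcribed here, but the "Excel can't even do basic maths" aside most likely refers to binary floating point, which Python shares with Excel (both use IEEE 754 doubles). A hedged sketch of why explicit, tolerance-based checks beat blanket error suppression like IFERROR:

```python
import math

# Naive equality fails under IEEE 754 doubles, in Python as in Excel:
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2 - 0.3)   # roughly 5.55e-17, not 0

def totals_agree(a: float, b: float, tol: float = 1e-9) -> bool:
    """Purpose-built model check: do two totals match within tolerance?

    The opposite of wrapping a formula in IFERROR, which hides
    whatever actually went wrong.
    """
    return math.isclose(a, b, abs_tol=tol)

print(totals_agree(0.1 + 0.2, 0.3))  # True
```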

Author(s): Andrew Berg, Danielle Stein Fairhurst

Publication Date: 2 June 2021

Publication Site: YouTube

How the government’s mistaken prices disclosure derailed a big follow-on solicitation

Link: https://federalnewsnetwork.com/contracting/2021/07/how-the-governments-mistaken-prices-disclosure-derailed-a-big-follow-on-solicitation/

Excerpt:

When the Defense Information Systems Agency sought a new satellite services acquisition on behalf of the Navy, it included a spreadsheet so bidders could fill in their prices. But the spreadsheet included the prices from the current contract, which were supposed to be inaccessible. For how things turned out, Smith Pachter McWhorter procurement attorney Joe Petrillo joined Federal Drive with Tom Temin.

…..

Joe Petrillo: Sure. This is another Excel spreadsheet disaster, and we talked about one a few weeks ago. It involved an acquisition of satellite telecom services for the Navy’s Military Sealift Command. It was an acquisition of commercial satellite telecommunications services. And they were divided into both bandwidth and non-bandwidth services. And the contract would be able to run for up to 10 years in duration. Part of the contract, as you said, was an Excel spreadsheet of the various different line items with blanks for offerors to include their price. Unfortunately, this spreadsheet had hidden tabs, 19 hidden tabs, and those included, among other things, historical pricing information from the current contract. So Inmarsat, which was the incumbent contractor holding that contract, notified the government and said, look, you’ve disclosed our pricing information, do something about it. So the government deleted the offending spreadsheet from the SAM.gov website. But they understood, and this was the case, that third-party aggregators had already downloaded it, and it was out there, it was available.
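
Hidden worksheets are easy to miss by eye but trivial to find programmatically. A sketch using openpyxl (which reads the modern .xlsx format; the filename is hypothetical) of the kind of pre-publication check that would have flagged the 19 hidden tabs:

```python
from openpyxl import load_workbook

def hidden_sheets(path: str) -> list[str]:
    """Return the names of worksheets that are hidden or 'very hidden'."""
    wb = load_workbook(path, read_only=True)
    return [ws.title for ws in wb.worksheets if ws.sheet_state != "visible"]

# Hypothetical filename for illustration:
print(hidden_sheets("pricing_template.xlsx"))
```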

Author(s): Tom Temin, Joe Petrillo

Publication Date: 8 July 2021

Publication Site: Federal News Network

Have Fun With Approximations!

Link: https://www.linkedin.com/pulse/have-fun-approximations-mary-pat-campbell/

Pdf: https://drive.google.com/file/d/0ByabEDuWaN6FNmZhTDBYeEVrNVE/view?resourcekey=0-U4GI2_9zn4UQdWza1bq95w

Excerpt:

In the pre-computer days, people used these approximations due to having to do all calculations by hand or with the help of tables. Of course, many approximations are done by computers themselves — the way computers calculate functions such as sine() and exp() involves approaches like Taylor series expansions.

The specific approximation techniques I try (1 “exact” and 6 different approximations… including the final ones where I put approximations within approximations just because I can) are not important. But the concept that you should know how to try out and test approximation approaches in case you need them is important for those doing numerical computing.
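
For instance, a truncated Taylor series for e^x can be tried out and tested against the library function, in the spirit of the article (the term counts below are arbitrary choices):

```python
import math

def exp_taylor(x: float, terms: int = 10) -> float:
    """Truncated Taylor series for e**x about 0: sum of x**n / n!."""
    return sum(x ** n / math.factorial(n) for n in range(terms))

# Compare the approximation against math.exp and watch the error shrink:
for terms in (4, 8, 12):
    approx = exp_taylor(1.5, terms)
    print(f"{terms:2d} terms: {approx:.10f}  error={abs(approx - math.exp(1.5)):.2e}")
```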

Author(s): Mary Pat Campbell

Publication Date: 3 February 2016 (updated for links 2021)

Publication Site: LinkedIn, CompAct, Society of Actuaries

Several Ways to Improve Your Use of Excel for Actuarial Production

Link: https://www.soa.org/sections/small-insurance/small-insurance-newsletter/2021/june/stn-2021-06-mathys/

Excerpt:

Create a Consistent Structure for Calculations

When spreadsheets are created ad-hoc, the usage of time steps tends to be inconsistent: advancing by rows in one sheet, columns in another, and even a mix of the two in the same sheet. Sometimes steps will be weeks, other times months, quarters, or years. This is confusing for users and reviewers, leads to low trust, increases the time for updates and audits, and adds to the risks of the spreadsheet.

A better way is to make all calculations follow a consistent layout, either across rows or columns, and use that layout for all calculations, even if it requires a few more rows or columns. For example, one way to make calculations consistent is with time steps going across the columns and each individual calculation going down the rows:
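
The article's illustrative figure isn't reproduced here; a rough pandas analogue of that layout, with time steps across the columns, one calculation per row, and made-up numbers:

```python
import pandas as pd

# Time steps across the columns, one calculation per row.
months = pd.period_range("2021-01", periods=4, freq="M")
calc = pd.DataFrame(index=["premium", "claims", "net_cash_flow"],
                    columns=months, dtype=float)

calc.loc["premium"] = [100.0, 100.0, 100.0, 100.0]  # placeholder figures
calc.loc["claims"] = [60.0, 65.0, 70.0, 62.0]
calc.loc["net_cash_flow"] = calc.loc["premium"] - calc.loc["claims"]

print(calc)
```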

Author(s): Stephan Mathys

Publication Date: June 2021

Publication Site: Small Talk at the Society of Actuaries

The tyranny of spreadsheets

Link: https://financialpost.com/fp-work/the-tyranny-of-spreadsheets-we-take-numbers-for-granted-until-we-run-out-of-them

Excerpt:

Somewhere in PHE’s data pipeline, someone had used the wrong Excel file format, XLS rather than the more recent XLSX. And XLS spreadsheets simply don’t have that many rows: 2 to the power of 16, which is 65,536. This meant that during some automated process, cases had vanished off the bottom of the spreadsheet, and nobody had noticed.

The idea of simply running out of space to put the numbers was darkly amusing. A few weeks after the data-loss scandal, I found myself able to ask Bill Gates himself about what had happened. Gates no longer runs Microsoft, and I was interviewing him about vaccines for a BBC program called How to Vaccinate The World. But the opportunity to have a bit of fun quizzing him about XLS and XLSX was too good to pass up.

I expressed the question in the nerdiest way possible, and Gates’s response was so strait-laced I had to smile: “I guess… they overran the 64,000 limit, which is not there in the new format, so…” Well, indeed. Gates then added, “It’s good to have people double-check things, and I’m sorry that happened.”

Exactly how the outdated XLS format came to be used is unclear. PHE sent me an explanation, but it was rather vague. I didn’t understand it, so I showed it to some members of Eusprig, the European Spreadsheet Risks Group. They spend their lives analyzing what happens when spreadsheets go rogue. They’re my kind of people. But they didn’t understand what PHE had told me, either. It was all a little light on detail.
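
The row limit Harford describes is exactly 2^16 = 65,536. A minimal guard (the function is mine, not PHE's) that an export pipeline could run before writing to the legacy format:

```python
XLS_MAX_ROWS = 2 ** 16  # 65,536 rows in the legacy .xls format

def check_fits_xls(n_rows: int) -> None:
    """Refuse to export data that would overrun a legacy .xls sheet."""
    if n_rows > XLS_MAX_ROWS:
        raise ValueError(
            f"{n_rows - XLS_MAX_ROWS:,} rows would vanish off the bottom "
            "of an .xls sheet; write .xlsx instead"
        )

check_fits_xls(16_000)      # fine: a small extract fits
# check_fits_xls(120_000)   # would raise: "54,464 rows would vanish ..."
```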

Author(s): Tim Harford

Publication Date: 29 June 2021

Publication Site: Financial Post