Julia for Actuaries

Link: https://juliaactuary.org/blog/julia-actuaries/

Excerpt:

Looking at other great tools like R and Python, it can be difficult to summarize a single reason to motivate a switch to Julia, but hopefully this article piqued an interest to try it for your next project.

That said, Julia shouldn’t be the only tool in your tool-kit. SQL will remain an important way to interact with databases. R and Python aren’t going anywhere in the short term and will always offer a different perspective on things!

In an earlier article, I talked about becoming a 10x Actuary which meant being proficient in the language of computers so that you could build and implement great things. In a large way, the choice of tools and paradigms shape your focus. Productivity is one aspect, expressiveness is another, speed one more. There are many reasons to think about what tools you use and trying out different ones is probably the best way to find what works best for you.

It is said that you cannot fully conceptualize something unless your language has a word for it. Similar to spoken language, you may find that breaking out of spreadsheet coordinates (and even a dataframe-centric view of the world) reveals different questions to ask and enables innovative ways to solve problems. In this way, you reward your intellect while building more meaningful and relevant models and analysis.

Author(s): Alec Loudenback

Publication Date: 9 July 2020

Publication Site: JuliaActuary

Autocorrect errors in Excel still creating genomics headache

Link: https://www.nature.com/articles/d41586-021-02211-4

Graphic:

Excerpt:

In 2016, Mark Ziemann and his colleagues at the Baker IDI Heart and Diabetes Institute in Melbourne, Australia, quantified the problem. They found that one-fifth of papers in top genomics journals contained gene-name conversion errors in Excel spreadsheets published as supplementary data [2]. These data sets are frequently accessed and used by other geneticists, so errors can perpetuate and distort further analyses.

However, despite the issue being brought to the attention of researchers — and steps being taken to fix it — the problem is still rife, according to an updated and larger analysis led by Ziemann, now at Deakin University in Geelong, Australia [3]. His team found that almost one-third of more than 11,000 articles with supplementary Excel gene lists published between 2014 and 2020 contained gene-name errors (see ‘A growing problem’).

Simple checks can detect autocorrect errors, says Ziemann, who researches computational reproducibility in genetics. But without those checks, the errors can easily go unnoticed because of the volume of data in spreadsheets.
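The kind of simple check Ziemann describes is easy to script. As a hedged sketch (not his actual pipeline, and the prefix list is illustrative rather than exhaustive), a few lines of Python can flag gene symbols that Excel is known to autoconvert into dates, such as SEPT2 ("2-Sep") or MARCH1 ("1-Mar"):

```python
import re

# Gene symbols whose names Excel autoconverts into dates, e.g.
# SEPT2 -> "2-Sep", MARCH1 -> "1-Mar". Illustrative prefixes only.
DATE_LIKE = re.compile(r"^(SEPT|MARCH|DEC)\d+$", re.IGNORECASE)

def flag_date_like(genes):
    """Return the gene symbols a spreadsheet import would likely mangle."""
    return [g for g in genes if DATE_LIKE.match(g)]

supplementary_genes = ["SEPT2", "TP53", "MARCH1", "BRCA1", "DEC1"]
flagged = flag_date_like(supplementary_genes)
```

Run against a supplementary gene list before (and after) it touches a spreadsheet; any symbol flagged on the way in but missing on the way out has probably been converted.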

Author(s): Dyani Lewis

Publication Date: 13 August 2021

Publication Site: nature

Virtual Meetup: To Err is Human but to ISERR is Never OK!

Video description:

Have you ever built a perfect financial model without any errors? Thought not! And for that reason, all good modellers know they need to include some error checks. But what is not as clear is how many error checks you should have, when you should include them and what form they should take. Excel “helpfully” provided us with functions like ISERR, ISERROR and IFERROR, but as you progress your modelling journey you should learn to avoid these functions. Plus, you also learn the sad truth that Excel can’t even do basic maths sometimes! Join us to hear from financial modelling specialist Andrew Berg, who has spent years building models, and so happily admits he has probably already made most of the mistakes you haven’t yet had a chance to! The good news is that he is willing to share the tips he has learned about the right types of error checks to add to your models so you don’t have to learn the hard way.

★ Download the resources here ► https://plumsolutions.com.au/virtual-…
★ Register for more meetups like this ► https://plumsolutions.com.au/meetup/
★ Connect with Andrew on LinkedIn ► https://www.linkedin.com/in/andrew-be…
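The case against blanket error-suppression functions translates directly to general-purpose code. A hedged Python analogy (the function names are invented for illustration): wrapping a whole formula in IFERROR is like a bare except that turns a bad input into a plausible-looking number, whereas a targeted check surfaces the real problem.

```python
def rate_blanket(premium, exposure):
    """Like =IFERROR(premium/exposure, 0): any failure becomes a quiet 0."""
    try:
        return premium / exposure
    except Exception:
        return 0.0  # a data error now looks like a real, low rate

def rate_checked(premium, exposure):
    """A targeted check: name the condition you expect, fail loudly otherwise."""
    if exposure == 0:
        raise ValueError("exposure is zero; check the input data")
    return premium / exposure
```

With the blanket version, a zero-exposure record silently flows downstream as a rate of 0.0; with the targeted version, the model stops and tells you which assumption was violated.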

Author(s): Andrew Berg, Danielle Stein Fairhurst

Publication Date: 2 June 2021

Publication Site: YouTube

How the government’s mistaken prices disclosure derailed a big follow-on solicitation

Link: https://federalnewsnetwork.com/contracting/2021/07/how-the-governments-mistaken-prices-disclosure-derailed-a-big-follow-on-solicitation/

Excerpt:

When the Defense Information Systems Agency sought a new satellite services acquisition on behalf of the Navy, it included a spreadsheet so bidders could fill in their prices. But the spreadsheet included the prices from the current contract, which were supposed to be inaccessible. For how things turned out, Smith Pachter McWhorter procurement attorney Joe Petrillo joined Federal Drive with Tom Temin.

…..

Joe Petrillo: Sure. This is another Excel spreadsheet disaster, and we talked about one a few weeks ago. It involved an acquisition of satellite telecom services for the Navy’s Military Sealift Command. It was an acquisition of commercial satellite telecommunications services. And they were divided into both bandwidth and non-bandwidth services. And the contract would be able to run for up to 10 years in duration. Part of the contract, as you said, was an Excel spreadsheet of the various different line items with blanks for offerors to include their price. Unfortunately, this spreadsheet had hidden tabs, 19 hidden tabs, and those included, among other things, historical pricing information from the current contract. So Inmarsat, which was the incumbent contractor holding that contract, notified the government and said, look, you’ve disclosed our pricing information, do something about it. So the government deleted the offending spreadsheet from the SAM.gov website. But they understood, and this was the case, that third-party aggregators had already downloaded it, and it was out there, it was available.
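Hidden tabs like these are detectable before a file ever goes out the door. As a hedged sketch (not DISA's actual process, and assuming the modern .xlsx format): an .xlsx file is a zip archive, and each tab's visibility is recorded in xl/workbook.xml, so the standard library alone can list every non-visible sheet.

```python
import zipfile
import xml.etree.ElementTree as ET

NS = "{http://schemas.openxmlformats.org/spreadsheetml/2006/main}"

def hidden_sheets(xlsx_file):
    """List sheet names whose state is 'hidden' or 'veryHidden'.

    An .xlsx file is a zip archive; visibility is the optional 'state'
    attribute on each <sheet> element in xl/workbook.xml.
    """
    with zipfile.ZipFile(xlsx_file) as zf:
        root = ET.fromstring(zf.read("xl/workbook.xml"))
    return [s.get("name") for s in root.iter(NS + "sheet")
            if s.get("state") in ("hidden", "veryHidden")]
```

A pre-release check as simple as refusing to publish any workbook for which this list is non-empty would have caught all 19 tabs.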

Author(s): Tom Temin, Joe Petrillo

Publication Date: 8 July 2021

Publication Site: Federal News Network

Have Fun With Approximations!

Link: https://www.linkedin.com/pulse/have-fun-approximations-mary-pat-campbell/

Graphic:

Pdf: https://drive.google.com/file/d/0ByabEDuWaN6FNmZhTDBYeEVrNVE/view?resourcekey=0-U4GI2_9zn4UQdWza1bq95w

Excerpt:

In the pre-computer days, people used these approximations due to having to do all calculations by hand or with the help of tables. Of course, many approximations are done by computers themselves — the way computers calculate functions such as sine() and exp() involves approaches like Taylor series expansions.

The specific approximation techniques I try (1 “exact” and 6 different approximations… including the final ones where I put approximations within approximations just because I can) are not important. But the concept that you should know how to try out and test approximation approaches in case you need them is important for those doing numerical computing.
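As a minimal illustration of that testing mindset (not one of the article's own techniques): a truncated Taylor series for sin(x), compared against the library value so the error of the approximation is measured rather than assumed.

```python
import math

def sin_taylor(x, terms=6):
    """Truncated Maclaurin series: sin(x) ~ sum (-1)^k x^(2k+1) / (2k+1)!"""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

x = 0.5
approx = sin_taylor(x)
error = abs(approx - math.sin(x))  # quantify, don't guess, the accuracy
```

Six terms already agree with math.sin to well beyond double-precision noise near zero; pushing x larger shows where the truncation starts to bite, which is exactly the kind of check worth automating.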

Author(s): Mary Pat Campbell

Publication Date: 3 February 2016 (updated for links 2021)

Publication Site: LinkedIn, CompAct, Society of Actuaries

Several Ways to Improve Your Use of Excel for Actuarial Production

Link: https://www.soa.org/sections/small-insurance/small-insurance-newsletter/2021/june/stn-2021-06-mathys/

Graphic:

Excerpt:

Create a Consistent Structure for Calculations

When spreadsheets are created ad-hoc, the usage of time steps tends to be inconsistent: advancing by rows in one sheet, columns in another, and even a mix of the two in the same sheet. Sometimes steps will be weeks, other times months, quarters, or years. This is confusing for users and reviewers, leads to low trust, increases the time for updates and audits, and adds to the risks of the spreadsheet.

A better way is to make all calculations follow a consistent layout, either across rows or columns, and use that layout for all calculations, regardless of whether it requires a few more rows or columns. For example, one way to make calculations consistent is with time steps going across the columns and each individual calculation going down the rows:
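The same discipline carries over when a spreadsheet model is rebuilt in code. A minimal sketch (the row names are invented for illustration): time steps run across one axis, each named calculation occupies one row, and every derived quantity keeps the same orientation as its inputs.

```python
# Time steps across "columns" (list positions); one calculation per row.
months = ["2021-01", "2021-02", "2021-03", "2021-04"]
model = {
    "premium": [100.0, 100.0, 100.0, 100.0],
    "claims":  [60.0, 65.0, 70.0, 75.0],
}
# Derived rows follow the identical layout, never a transposed one.
model["net_cashflow"] = [p - c for p, c in
                         zip(model["premium"], model["claims"])]
```

Because every row shares the time axis, a reviewer can audit any column as a single point in time and any row as a single quantity's history, with no mental transposition required.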

Author(s): Stephan Mathys

Publication Date: June 2021

Publication Site: Small Talk at the Society of Actuaries

The tyranny of spreadsheets

Link: https://financialpost.com/fp-work/the-tyranny-of-spreadsheets-we-take-numbers-for-granted-until-we-run-out-of-them

Excerpt:

Somewhere in PHE’s data pipeline, someone had used the wrong Excel file format, XLS rather than the more recent XLSX. And XLS spreadsheets simply don’t have that many rows: 2 to the power of 16, about 64,000. This meant that during some automated process, cases had vanished off the bottom of the spreadsheet, and nobody had noticed.
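The failure mode is easy to reproduce. A hedged sketch (not PHE's actual pipeline): any automated step that writes more rows than the legacy format can hold silently drops the excess, with no error raised.

```python
XLS_MAX_ROWS = 2 ** 16  # 65,536: hard row limit of the legacy .xls format

def write_like_xls(rows):
    """Simulate saving to .xls: rows past the limit simply never make it
    into the file, and the caller is not told anything was lost."""
    return rows[:XLS_MAX_ROWS]

cases = [f"case-{i}" for i in range(70_000)]
saved = write_like_xls(cases)
lost = len(cases) - len(saved)  # cases that vanished off the bottom
```

A one-line reconciliation, asserting that the row count written equals the row count read back, is all it takes to turn this silent loss into a loud failure.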

The idea of simply running out of space to put the numbers was darkly amusing. A few weeks after the data-loss scandal, I found myself able to ask Bill Gates himself about what had happened. Gates no longer runs Microsoft, and I was interviewing him about vaccines for a BBC program called How to Vaccinate The World. But the opportunity to have a bit of fun quizzing him about XLS and XLSX was too good to pass up.

I expressed the question in the nerdiest way possible, and Gates’s response was so strait-laced I had to smile: “I guess… they overran the 64,000 limit, which is not there in the new format, so…” Well, indeed. Gates then added, “It’s good to have people double-check things, and I’m sorry that happened.”

Exactly how the outdated XLS format came to be used is unclear. PHE sent me an explanation, but it was rather vague. I didn’t understand it, so I showed it to some members of Eusprig, the European Spreadsheet Risks Group. They spend their lives analyzing what happens when spreadsheets go rogue. They’re my kind of people. But they didn’t understand what PHE had told me, either. It was all a little light on detail.

Author(s): Tim Harford

Publication Date: 29 June 2021

Publication Site: Financial Post

Python for Actuaries

Link: https://www.pathlms.com/cas/courses/15577/webinars/7402

Slides: https://cdn.fs.pathlms.com/p3Z78DJJRFWoqdziCQyf?_ga=2.2405433.801394078.1623949999-2118863750.1623949999#/

Graphic:

Description:

Explaining why actuaries may want to use the Python language in their work, and providing a demo. Free recorded webcast from the CAS.

Author(s): Brian Fannin, John Bogaardt

Publication Date: 6 February 2020

Publication Site: CAS Online Learning

How wearable AI could help you recover from covid

Link: https://www.technologyreview.com/2021/06/09/1025889/wearable-ai-body-sensor-covid-chicago/

Excerpt:

The Illinois program gives people recovering from covid-19 a take-home kit that includes a pulse oximeter, a disposable Bluetooth-enabled sensor patch, and a paired smartphone. The software takes data from the wearable patch and uses machine learning to develop a profile of each person’s vital signs. The monitoring system alerts clinicians remotely when a patient’s vitals— such as heart rate—shift away from their usual levels. 

Typically, patients recovering from covid might get sent home with a pulse oximeter. PhysIQ’s developers say their system is much more sensitive because it uses AI to understand each patient’s body, and its creators claim it is much more likely to anticipate important changes. 
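The underlying idea, learning each patient's own baseline and alerting on departures from it, can be sketched in a few lines (a toy illustration only; PhysIQ's actual models are far more sophisticated than a threshold rule):

```python
import statistics

def should_alert(history, reading, k=3.0):
    """Alert when a new reading sits more than k standard deviations
    away from this patient's own prior readings."""
    baseline = statistics.fmean(history)
    spread = statistics.stdev(history)
    return abs(reading - baseline) > k * spread

resting_hr = [62, 64, 63, 61, 65, 62, 63]  # one patient's usual values
```

The key point the article makes survives even in the toy version: the threshold is personal. A resting heart rate of 90 is unremarkable for some patients but, against this patient's baseline of roughly 63, it would trigger an alert days before a fixed population-wide cutoff might.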

“It’s an enormous benefit,” says Terry Vanden Hoek, the chief medical officer and head of emergency medicine at University of Illinois Health, which is hosting the pilot. Working with covid cases is hard, he says: “When you work in the emergency department it’s sad to see patients who waited too long to come in for help.  They would require intensive care on a ventilator. You couldn’t help but ask, ‘If we could have warned them four days before, could we have prevented all this?’”

Author(s): Rod McCullom

Publication Date: 9 June 2021

Publication Site: MIT Tech Review

COVID’s Unlikely Offspring: The Rise of Smartwatch as Illness Detector

Link: https://spectrum.ieee.org/tech-talk/consumer-electronics/gadgets/covid-byproduct-smartwatch-increasingly-illness-detector

Excerpt:

Research is still in its early stage, but the last several months have seen a number of research efforts to increase the smartwatch’s illness detection capabilities. And it now looks like these tools will likely outlast the present pandemic.

Scripps Research has introduced an app called MyDataHelps as part of a study that tracks changes to a person’s sleep, activity level or resting heart rate. Fitbit is also building an algorithm that can detect COVID-19 before a person experiences symptoms. Meanwhile, Stanford Medicine researchers have developed a smartwatch alert system that can work on any wearable device, including Fitbit, Apple Watch and Garmin watches.

Michael Snyder, professor and chair of the Department of Genetics and director of Stanford Center for Genomics and Personalized Medicine at Stanford University, says watches can pick up signals of respiratory illnesses, even with asymptomatic cases. As COVID-19 hit, Snyder’s research increased “full blast,” he said. 

Author(s): Brian T. Horowitz

Publication Date: 6 March 2021

Publication Site: IEEE Spectrum

What the Colonial Pipeline ransomware attack can teach us about national cybersecurity defense

Link: https://thenextweb.com/news/what-the-colonial-pipeline-ransomware-attack-can-teach-us-about-national-cybersecurity-defense-syndication?mc_cid=c0c5baa839&mc_eid=983bcf5922

Excerpt:

There are no easy solutions to shoring up U.S. national cyber defenses.

Software supply chains and private sector infrastructure companies are vulnerable to hackers.

Many U.S. companies outsource software development because of a talent shortage, and some of that outsourcing goes to companies in Eastern Europe that are vulnerable to Russian operatives.

U.S. national cyber defense is split between the Department of Defense and the Department of Homeland Security, which leaves gaps in authority.

Author(s): Terry Thompson

Publication Date: 12 May 2021

Publication Site: The Next Web

Ransomware crooks post cops’ psych evaluations after talks with DC police stall

Link: https://arstechnica.com/gadgets/2021/05/ransomware-crooks-post-cops-psych-evaluations-after-talks-with-dc-police-stall/?mc_cid=c0c5baa839&mc_eid=983bcf5922

Excerpt:

A ransomware gang that hacked the District of Columbia’s Metropolitan Police Department (MPD) in April posted personnel records on Tuesday that revealed highly sensitive details for almost two dozen officers, including the results of psychological assessments and polygraph tests; driver’s license images; fingerprints; social security numbers; dates of birth; and residential, financial, and marriage histories.

….

The operators demanded $4 million in exchange for a promise not to publish any more information and provide a decryption key that would restore the data.

“You are a state institution, treat your data with respect and think about their price,” the operators said, according to the transcript. “They cost even more than 4,000,000, do you understand that?”

“Our final proposal is to offer to pay $100,000 to prevent the release of the stolen data,” the MPD negotiator eventually replied. “If this offer is not acceptable, then it seems our conversation is complete. I think we understand the consequences of not reaching an agreement. We are OK with that outcome.”

Author(s): Dan Goodin

Publication Date: 11 May 2021

Publication Site: Ars Technica