How wearable AI could help you recover from covid

Link: https://www.technologyreview.com/2021/06/09/1025889/wearable-ai-body-sensor-covid-chicago/

Excerpt:

The Illinois program gives people recovering from covid-19 a take-home kit that includes a pulse oximeter, a disposable Bluetooth-enabled sensor patch, and a paired smartphone. The software takes data from the wearable patch and uses machine learning to develop a profile of each person’s vital signs. The monitoring system alerts clinicians remotely when a patient’s vitals, such as heart rate, shift away from their usual levels.

Typically, patients recovering from covid might get sent home with a pulse oximeter. PhysIQ’s developers say their system is much more sensitive because it uses AI to understand each patient’s body, and they claim it is much more likely to anticipate important changes.

“It’s an enormous benefit,” says Terry Vanden Hoek, the chief medical officer and head of emergency medicine at University of Illinois Health, which is hosting the pilot. Working with covid cases is hard, he says: “When you work in the emergency department it’s sad to see patients who waited too long to come in for help. They would require intensive care on a ventilator. You couldn’t help but ask, ‘If we could have warned them four days before, could we have prevented all this?’”
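
The excerpt describes the system only at a high level: learn each patient’s own baseline vitals from the patch data, then alert clinicians when new readings drift away from it. Here is a minimal sketch of that general idea in Python; the per-patient mean/standard-deviation profile and the 3-sigma alert rule are illustrative assumptions, not PhysIQ’s actual model.

    import numpy as np

    def baseline_profile(history: np.ndarray) -> tuple[float, float]:
        """Summarize a patient's own vital-sign history (e.g., heart rate)."""
        return float(history.mean()), float(history.std())

    def should_alert(history: np.ndarray, new_reading: float,
                     z_thresh: float = 3.0) -> bool:
        """Alert when a reading deviates from this patient's usual levels."""
        mu, sigma = baseline_profile(history)
        z = abs(new_reading - mu) / max(sigma, 1e-6)  # guard zero variance
        return z > z_thresh

    # A patient whose resting heart rate usually sits near 62 bpm:
    history = np.random.default_rng(0).normal(62, 3, size=500)
    print(should_alert(history, 64))  # within usual range -> False
    print(should_alert(history, 95))  # large shift -> True, notify clinicians

The per-patient profile is the point of the design: 95 bpm may be routine for one person and an alarming shift for another, so a single population-wide threshold, as with a bare pulse oximeter protocol, is less sensitive.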

Author(s): Rod McCullom

Publication Date: 9 June 2021

Publication Site: MIT Tech Review

We need to design distrust into AI systems to make them safer

Link: https://www.technologyreview.com/2021/05/13/1024874/ai-ayanna-howard-trust-robots/

Excerpt:

Since that experiment, have you seen this phenomenon replicated in the real world?

Every time I see a Tesla accident. Especially the earlier ones. I was like, “Yep, there it is.” People are trusting these systems too much. And I remember after the very first one, what did they do? They were like, now you’re required to hold the steering wheel for something like five-second increments. If you don’t have your hand on the wheel, the system will deactivate.

But, you know, they never came and talked to me or my group, because that’s not going to work. And why that doesn’t work is because it’s very easy to game the system. If you’re looking at your cell phone and then you hear the beep, you just put your hand up, right? It’s subconscious. You’re still not paying attention. And it’s because you think the system’s okay and that you can still do whatever it was you were doing—reading a book, watching TV, or looking at your phone. So it doesn’t work because they did not increase the level of risk or uncertainty, or disbelief, or mistrust. They didn’t increase that enough for someone to re-engage.
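
Howard’s critique is about the control loop, not the sensor: when the only requirement is a periodic touch, the check can be satisfied reflexively with no actual attention. A hypothetical sketch of such a touch-reset watchdog follows; the five-second figure comes from the interview, while the class and method names are invented for illustration.

    import time

    HANDS_ON_TIMEOUT = 5.0  # seconds between required touches (per the interview)

    class NaiveAttentionWatchdog:
        """Deactivates driver assistance if the wheel goes untouched too long.

        The flaw Howard describes: ANY touch resets the countdown, so a driver
        who reflexively taps the wheel at each warning beep passes the check
        while still reading, watching TV, or looking at a phone.
        """

        def __init__(self) -> None:
            self.last_touch = time.monotonic()

        def on_wheel_touch(self) -> None:
            self.last_touch = time.monotonic()  # the gameable step

        def assistance_allowed(self) -> bool:
            return time.monotonic() - self.last_touch < HANDS_ON_TIMEOUT

    wd = NaiveAttentionWatchdog()
    wd.on_wheel_touch()             # reflexive tap at the beep...
    print(wd.assistance_allowed())  # ...and the system is satisfied: True

Her point is that the fix is not another reflex-satisfiable check, but raising the driver’s sense of risk or uncertainty enough to force genuine re-engagement.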

Author(s): Karen Hao

Publication Date: 13 May 2021

Publication Site: MIT Tech Review

Error-riddled data sets are warping our sense of how good AI really is

Link: https://www.technologyreview.com/2021/04/01/1021619/ai-data-errors-warp-machine-learning-progress/

Paper link: https://arxiv.org/pdf/2103.14749.pdf

Excerpt:

Yes, but: In recent years, studies have found that these data sets can contain serious flaws. ImageNet, for example, contains racist and sexist labels as well as photos of people’s faces obtained without consent. The latest study now looks at another problem: many of the labels are just flat-out wrong. A mushroom is labeled a spoon, a frog is labeled a cat, and a high note from Ariana Grande is labeled a whistle. The ImageNet test set has an estimated label error rate of 5.8%. Meanwhile, the test set for QuickDraw, a compilation of hand drawings, has an estimated error rate of 10.1%.
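
The linked paper arrives at these rates with confident learning: train a model on the dataset, then flag examples whose given label conflicts with a prediction the model makes confidently, where “confidently” means above that class’s average self-confidence, and finally confirm the candidates with human reviewers. Below is a simplified sketch of the flagging step; the two-class toy data is an assumption for illustration, not the paper’s full algorithm.

    import numpy as np

    def flag_suspect_labels(pred_probs: np.ndarray,
                            labels: np.ndarray) -> np.ndarray:
        """Flag examples whose label conflicts with a confident prediction.

        A class's threshold is the model's average confidence on examples
        that carry that label (its "self-confidence").
        """
        n, k = pred_probs.shape
        thresholds = np.array([pred_probs[labels == j, j].mean()
                               for j in range(k)])
        suspect = np.zeros(n, dtype=bool)
        for i in range(n):
            confident_classes = np.flatnonzero(pred_probs[i] >= thresholds)
            if confident_classes.size and labels[i] not in confident_classes:
                suspect[i] = True  # model is confident in a different class
        return suspect

    # Toy case: the third item is labeled class 0 but confidently predicted
    # as class 1 -- the "frog labeled a cat" situation.
    probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
    labels = np.array([0, 0, 0, 1])
    print(flag_suspect_labels(probs, labels))  # [False False  True False]

At ImageNet’s reported 5.8% rate, that works out to roughly 2,900 mislabeled images among the 50,000 in its commonly used test split.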

Author(s): Karen Hao

Publication Date: 1 April 2021

Publication Site: MIT Tech Review

Ethics and use of Data Sources for Underwriting ft. Neil Raden and Kevin Pledge - NSNA (Ep. 4)

Description:

The video features Neil Raden, author of “Ethical Use of Artificial Intelligence for Actuaries.” Alongside him, it features Kevin Pledge, FSA, FIA, CEO of Acceptiv and chair of the Innovation and Research Committee of the SOA. We discuss the ethics and use of new data sources covered in the recent Emerging Issues in Underwriting Survey Report by the IFoA.

Author(s): Harsh Jaitak, Kevin Pledge, Neil Raden

Publication Date: 17 March 2021

Publication Site: TBD Actuarial at YouTube

AI, Privacy, Racial Bias Among State Insurance Regulator Priorities for 2021

Link: https://www.carriermanagement.com/news/2021/02/10/216927.htm

Excerpt:

The NAIC 2021 priorities and the charges to its key committees are (in no specific order):

COVID-19 — In 2021, the NAIC will continue its “Priority One” initiative designed to support state insurance departments in their response to the ongoing pandemic and its impact on consumers and insurance markets. NAIC has a COVID resource page that includes information on actions taken by individual states in response to the COVID-19 pandemic that impact various lines of insurance. NAIC said insurance regulators will continue to analyze data and develop the tools so that consumer protection keeps pace with changes brought on by the virus.

Big Data/Artificial Intelligence — The Big Data and Artificial Intelligence Working Group is chaired by Doug Ommen of Iowa, with Elizabeth Kelleher Dwyer of Rhode Island and Mark Afable of Wisconsin as co-vice chairs.

…

Race & Insurance — The Special Committee on Race and Insurance is co-chaired by Maine Superintendent Eric Cioppa and New York Executive Deputy Superintendent of Insurance My Chi To.

The 2021 agenda for this panel calls for research into the level of diversity and inclusion within the insurance sector; engagement with a broad group of stakeholders on issues related to race, diversity and inclusion in, and access to, the insurance sector and insurance products; and an examination of current practices or barriers in the insurance sector that potentially disadvantage people of color and historically underrepresented groups.

Publication Date: 10 February 2021

Publication Site: Carrier Management