Pulse oximeters’ inaccuracies in darker-skinned people require urgent action, AGs tell FDA


Link: https://www.statnews.com/2023/11/07/pulse-oximeters-attorneys-general-urge-fda-action/

Excerpt:

More than two dozen attorneys general are urging Food and Drug Administration officials to take urgent action to address disparities in how well pulse oximeters, the fingertip devices used to monitor a person’s oxygen levels, work on people with darker skin.

In a Nov. 1 letter, the AGs noted that it had been a year since the FDA convened a public meeting of experts, who called for clearer labeling and more rigorous testing of the devices, and that no action had been taken.

“We, the undersigned Attorneys General, write to encourage the FDA to act with urgency to address the inaccuracy of pulse oximetry when used on people with darker toned skin,” said the letter, written by California Attorney General Rob Bonta and signed by 24 other attorneys general.

Pulse oximeters’ overestimation of oxygen levels in patients with darker skin has, in a slew of recent research studies, been linked to poorer outcomes for many patients because of delayed diagnosis, delayed hospital admissions, and delayed access to treatment, including for severe Covid-19 infections. Higher amounts of pigments called melanin in darker skin interfere with the ability of light-based sensors in pulse oximeters to detect oxygen levels in blood.

….

The delay has frustrated health care workers who use pulse oximeters and have studied them and followed the progress toward creating new devices that work better. “I just get mad that these things are not on the market,” Theodore J. Iwashyna, an ICU physician at Johns Hopkins, told STAT. “Just last week in my ICU, I had a patient whose pulse oximeter was reading 100% at the same time that his arterial blood gas showed that his oxygen levels were dangerously low. I need these things to work, and work in all my patients.”

Author(s): Usha Lee McFarling

Publication Date: 7 Nov 2023

Publication Site: STAT News

The insurance industry’s renewed focus on disparate impacts and unfair discrimination

Link: https://www.milliman.com/en/insight/the-insurance-industrys-renewed-focus-on-disparate-impacts-and-unfair-discrimination

Excerpt:

As consumers, regulators, and stakeholders demand more transparency and accountability with respect to how insurers’ business practices contribute to potential systemic societal inequities, insurers will need to adapt. One way insurers can do this is by conducting disparate impact analyses and establishing robust systems for monitoring and minimizing disparate impacts. There are several reasons why this is beneficial:

  1. Disparate impact analyses focus on identifying unintentional discrimination resulting in disproportionate impacts on protected classes. This potentially creates a higher standard than evaluating unfairly discriminatory practices depending on one’s interpretation of what constitutes unfair discrimination. Practices that do not result in disparate impacts are likely by default to also not be unfairly discriminatory (assuming that there are also no intentionally discriminatory practices in place and that all unfairly discriminatory variables codified by state statutes are evaluated in the disparate impact analysis).
  2. Disparate impact analyses that align with company values and mission statements reaffirm commitments to ensuring equity in the insurance industry. This provides goodwill to consumers and provides value to stakeholders.
  3. Disparate impact analyses can prevent or mitigate future legal issues. By proactively monitoring and minimizing disparate impacts, companies can reduce the likelihood of allegations of discrimination against a protected class and corresponding litigation.
  4. If writing business in Colorado, then establishing a framework for assessing and monitoring disparate impacts now will allow for a smooth transition once the Colorado bill goes into effect. If disparate impacts are identified, insurers have time to implement corrections before the bill is effective.

Author(s): Eric P. Krafcheck

Publication Date: 27 Sept 2021

Publication Site: Milliman

Evaluating Unintentional Bias in Private Passenger Automobile Insurance

Link: https://disb.dc.gov/page/evaluating-unintentional-bias-private-passenger-automobile-insurance

Public Hearing Notice: Evaluating Unintentional Bias in Private Passenger Automobile Insurance, June 29, 2022, 3 pm

Excerpt:

In 2020, Commissioner Karima Woods, Commissioner for the District of Columbia Department of Insurance, Securities and Banking (DISB) directed the creation of the Department’s first Diversity Equity and Inclusion Committee to engage in a wide-ranging review of financial equity and inclusion and to make recommendations to remove barriers to accessing financial services. Department staff developed draft initiatives, including an initiative related to insurers’ use of factors such as credit scores, education, occupation, home ownership and marital status in underwriting and ratemaking. Stakeholder feedback on this draft initiative resulted in the Department concluding that data was necessary to properly address this initiative. Department staff conducted research and contacted subject matter experts before determining that relevant data was not generally available.

The Department is undertaking this project to collect the relevant data. We determined this initiative will be deliberative and transparent to ensure the resultant data would address the issue of unintentional bias. We also decided to initially focus on private passenger automobile insurance as that is a line of insurance that affects many District consumers and has previously had questions raised about the use of non-driving factors. The collected data will build on previous work done by the Department through the 2018 and 2019 public hearings and examinations that looked at private passenger automobile insurance ratemaking methodologies.

For this project to look at the potential for unintentional bias in auto insurance, DISB will conduct a review of auto insurers’ rating and underwriting methodologies. As a first step, DISB will hold a public hearing on Wednesday, June 29, 2022 at 3 pm to gather stakeholder input on the review plan, which is outlined below. The Department has engaged the services of O’Neil Risk Consulting and Algorithmic Auditing (ORCAA) to assist the Department and provide subject matter expertise. Additionally, the Department will hold one or more meetings to follow up on any items raised during the public hearing.

Publication Date: accessed 18 Jun 2022

Publication Site: District of Columbia Department of Insurance, Securities & Banking

The EEOC wants to make AI hiring fairer for people with disabilities

Link: https://www.brookings.edu/blog/techtank/2022/05/26/the-eeoc-wants-to-make-ai-hiring-fairer-for-people-with-disabilities/

Excerpt:

That hiring algorithms can disadvantage people with disabilities is not exactly new information. In 2019, for my first piece at the Brookings Institution, I wrote about how automated interview software is definitionally discriminatory against people with disabilities. In a broader 2018 review of hiring algorithms, the technology advocacy nonprofit Upturn concluded that “without active measures to mitigate them, bias will arise in predictive hiring tools by default” and later notes this is especially true for those with disabilities. In their own report on this topic, the Center for Democracy and Technology found that these algorithms have “risk of discrimination written invisibly into their codes” and for “people with disabilities, those risks can be profound.” This is to say that there has long been broad consensus among experts that algorithmic hiring technologies are often harmful to people with disabilities, and that given that as many as 80% of businesses now use these tools, this problem warrants government intervention.

….

The EEOC’s concerns are largely focused on two problematic outcomes: (1) algorithmic hiring tools inappropriately punish people with disabilities; and (2) people with disabilities are dissuaded from an application process due to inaccessible digital assessments.

Illegally “screening out” people with disabilities

First, the guidance clarifies what constitutes illegally “screening out” a person with a disability from the hiring process. The new EEOC guidance presents any disadvantaging effect of an algorithmic decision against a person with a disability as a violation of the ADA, assuming the person can perform the job with legally required reasonable accommodations. In this interpretation, the EEOC is saying it is not enough to hire candidates with disabilities in the same proportion as people without disabilities. This differs from EEOC criteria for race, religion, sex, and national origin, which says that selecting candidates at a significantly lower rate from a selected group (say, less than 80% as many women as men) constitutes illegal discrimination.
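For readers unfamiliar with the four-fifths benchmark mentioned above, here is a minimal sketch of the arithmetic. All applicant counts are hypothetical, and this is not the EEOC's official procedure, just an illustration of the selection-rate comparison.

```python
# Minimal sketch of the "four-fifths" (80%) selection-rate comparison described
# above. All numbers are hypothetical; this is not the EEOC's official procedure.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def below_four_fifths(group_rate: float, reference_rate: float, threshold: float = 0.8) -> bool:
    """True if the group's selection rate is less than `threshold` times the reference rate."""
    return (group_rate / reference_rate) < threshold

# Hypothetical example: 30 of 100 women hired vs. 45 of 100 men hired.
women = selection_rate(30, 100)   # 0.30
men = selection_rate(45, 100)     # 0.45

print(round(women / men, 2))          # 0.67 -> impact ratio
print(below_four_fifths(women, men))  # True -> below the 80% benchmark
```

In this hypothetical, women are selected at only about 67% of the men's rate, so the four-fifths benchmark would flag potential adverse impact; the EEOC's disability guidance, by contrast, treats any disadvantaging effect on a qualified person with a disability as a potential ADA violation regardless of such ratios.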

Author(s): Alex Engler

Publication Date: 26 May 2022

Publication Site: Brookings

Professionalism Webinar Examines Unfair Discrimination in Insurance

Link: https://www.actuary.org/sites/default/files/2022-05/Actuarial-Update-May-2022.pdf

Graphic:

Excerpt:

THE ACADEMY hosted a May 26 webinar, “What Is Unfair Discrimination in Insurance?” in which presenters explored the current regulatory infrastructure relating to unfair and unlawful discrimination in insurance and the challenges presented by the increased use of big data and artificial intelligence (AI)-enabled systems.


Presenters were Daniel Schwarcz, an award-winning professor and scholar; former Illinois Director of Insurance Nat Shapo; and Brian Mullen, chairperson of the task force currently revising ASOP No. 12, Risk Classification (for All Practice Areas). General Counsel and Director of Professionalism Brian Jackson moderated.

Mullen opened by providing background on ASOP No. 12. Schwarcz discussed prohibitions on “unfair discrimination”—which occurs when an insurer considers factors unrelated to actuarial risk—in rates and underwriting. He noted that machine learning AI tends to produce the same results as intentional proxy discrimination. As a result, insurance becomes less available and less affordable to individuals because of their race, sex, genetics, health, or income. He also discussed a proposed definition of proxy discrimination, practical tests for proxy discrimination, and the benefits of such a definition.

Publication Date: May 2022

Publication Site: American Academy of Actuaries

METHODS FOR QUANTIFYING DISCRIMINATORY EFFECTS ON PROTECTED CLASSES IN INSURANCE

Link: https://www.casact.org/sites/default/files/2022-03/Research-Paper_Methods-for-Quantifying-Discriminatory-Effects.pdf

Graphic:

Excerpt:

This research paper’s main objective is to inspire and generate discussions about algorithmic bias across all areas of insurance and to encourage actuaries to be involved. Evaluating financial risk involves the creation of functions that consider myriad characteristics of the insured. Companies utilize diverse statistical methods and techniques, from relatively simple regression to complex and opaque machine learning algorithms. It has been alleged that the predictions produced by these mathematical algorithms have discriminatory effects against certain groups of society, known as protected classes.

The notion of discriminatory effects describes the disproportionately adverse effect algorithms and models could have on protected groups in society. As a result of the potential for discriminatory effects, the analytical processes followed by financial institutions for decision making have come under greater scrutiny by legislators, regulators, and consumer advocates. Interested parties want to know how to quantify such effects and potentially how to repair such systems if discriminatory effects have been detected.

This paper provides:

• A historical perspective of unfair discrimination in society and its impact on property and casualty insurance.
• Specific examples of allegations of bias in insurance and how the various stakeholders, including regulators, legislators, consumer groups, and insurance companies, have reacted and responded to these allegations.
• Some specific definitions of unfair discrimination that are interpreted in the context of insurance predictive models.
• A high-level description of some of the more common statistical metrics for bias detection that have been recently developed by the machine learning community, as well as a brief account of some machine learning algorithms that can help with mitigating bias in models.

This paper also presents a concrete example of an insurance pricing GLM model developed on anonymized French private passenger automobile data, which demonstrates how discriminatory effects can be measured and mitigated.
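The paper's own worked GLM example is far more detailed; as a loose, self-contained sketch of one of the simpler bias-detection metrics the excerpt alludes to (statistical parity, computed here on synthetic data with hypothetical group labels, not the paper's actual French auto dataset), a calculation could look like this:

```python
import numpy as np

# Loose illustration of a common bias-detection metric (statistical parity
# difference / disparate impact ratio) applied to binary model decisions.
# Synthetic data and group labels are hypothetical; this is not the paper's
# actual GLM, dataset, or methodology.

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)             # 0 = reference group, 1 = protected group
pred = rng.random(n) < (0.25 + 0.10 * group)   # synthetic binary model decisions

rate_ref = pred[group == 0].mean()
rate_prot = pred[group == 1].mean()

statistical_parity_diff = rate_prot - rate_ref   # 0 means parity
disparate_impact_ratio = rate_prot / rate_ref    # 1 means parity

print(f"Statistical parity difference: {statistical_parity_diff:.3f}")
print(f"Disparate impact ratio:        {disparate_impact_ratio:.3f}")
```

A disparate impact ratio near 1 (or a parity difference near 0) would indicate similar decision rates across the two groups; the paper's full treatment covers additional metrics and mitigation algorithms beyond this single comparison.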

Author(s): Roosevelt Mosley, FCAS, and Radost Wenman, FCAS

Publication Date: March 2022

Publication Site: CAS