Narrow or prejudiced thinking is simple to write down and easy to copy and paste over and over. Descriptions such as “difficult” and “disruptive” can become hard to escape. Once so labeled, patients can experience “downstream effects,” said Dr. Hardeep Singh, an expert in misdiagnosis who works at the Michael E. DeBakey Veterans Affairs Medical Center in Houston. He estimates misdiagnosis affects 12 million patients a year.
Conveying bias can be as simple as a pair of quotation marks. One team of researchers found that Black patients, in particular, were quoted in their records more frequently than other patients when physicians were characterizing their symptoms or health issues. The quotation mark patterns detected by researchers could be a sign of disrespect, used to communicate irony or sarcasm to future clinical readers. Among the types of phrases the researchers spotlighted were colloquial expressions and statements rendered in Black or ethnic slang.
“Black patients may be subject to systematic bias in physicians’ perceptions of their credibility,” the authors of the paper wrote.
That’s just one study in a rising tide of research on the variations in the language that clinicians use to describe patients of different races and genders. In many ways, the research is just catching up to what patients and doctors already knew: that discrimination can be conveyed and furthered by partial accounts.
Examine the quality of the theory behind the correlated variables. Is there good reason to believe, as validated by research, that the variables would occur together? If such validation does not exist, then the relationship may be spurious. For example, is there any validation for the relationship between the number of driver deaths in railway collisions by year (the horizontal axis) and the annual imports of Norwegian crude oil by the U.S., as depicted below? This is an example of a spurious correlation: it is not clear what a rational explanation for this relationship would be.
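A quick numeric sketch shows how easily two unrelated trending quantities can produce a near-perfect correlation. The series below are made up for illustration; they are not the actual figures behind the chart described above:

```python
import numpy as np

# Hypothetical, invented series: both simply happen to trend downward
# over the same span of years.
railway_deaths = np.array([101, 98, 92, 88, 84, 79, 75, 71])        # fictitious counts
oil_imports    = np.array([520, 500, 480, 455, 430, 410, 390, 370]) # fictitious volumes

# Pearson correlation between the two series.
r = np.corrcoef(railway_deaths, oil_imports)[0, 1]
print(f"Pearson r = {r:.3f}")
```

The computed r is very close to 1.0, yet it reflects nothing more than two shared downward trends. Without a validated theory connecting the variables, a high correlation like this is spurious.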
Systemic Influences and Socioeconomics
❑ Checking for and removing systemic biases is difficult.
❑ Systemic biases can creep in at every step of the modeling process: data, algorithms, and validation of results.
❑ Human involvement in designing and coding algorithms, where there is a lack of diversity among coders
❑ Biases embedded in training datasets
❑ Use of variables that proxy for membership in a protected class
❑ Statistical discrimination profiling of shopping behavior, such as price optimization
❑ Technology-facilitated advertising algorithms used in ad targeting and ad delivery
Author(s): David Sandberg, Data Science and Analytics Committee, AAA
For anti-racist dataviz, our most effective tool is context. The way that data is framed can have a very real impact on how it’s interpreted. For example, this case study from the New York Times shows two different framings of the same economic data and how, depending on where the author starts the x-axis, it can tell two very different — but both accurate — stories about the subject.
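The framing effect of the axis baseline can be reproduced in a few lines. This is a generic sketch using matplotlib and an invented series, not the Times' actual chart or data:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Invented monthly index values; the data is identical in both panels.
values = [102.0, 102.6, 103.1, 103.9, 104.4, 105.0]

fig, (ax_zoom, ax_zero) = plt.subplots(1, 2, figsize=(8, 3))

# Panel 1: axis starts just below the data -- small moves look dramatic.
ax_zoom.plot(values)
ax_zoom.set_ylim(101, 106)
ax_zoom.set_title("Axis starts at 101")

# Panel 2: axis starts at zero -- the same series looks nearly flat.
ax_zero.plot(values)
ax_zero.set_ylim(0, 110)
ax_zero.set_title("Axis starts at 0")

fig.savefig("framing.png")
```

Both panels are "accurate," but they invite very different readings; the choice of baseline is itself an editorial decision.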
As Pieta previously highlighted, dataviz in spaces that address race/ethnicity is sensitive to “deficit framing.” That is, when data is presented in a way that over-emphasizes differences between groups (while hiding the diversity of outcomes within groups), it promotes deficit thinking (see below) and can reinforce stereotypes about the (often minoritized) groups in focus.
In a follow-up study, Eli and Cindy Xiong (of UMass’ HCI-VIS Lab) confirmed Pieta’s arguments, showing that even “neutral” data visualizations of outcome disparities can lead to deficit thinking (and therefore stereotyping), and that the way visualizations are designed can significantly affect these harmful tendencies.