Causal Inference About the Effects of Interventions From Observational Studies in Medical Journals

Link: https://jamanetwork.com/journals/jama/fullarticle/2818746?guestAccessKey=66ec96e3-d156-46cf-928b-ff8b2a8fc35e&utm_source=silverchair&utm_medium=email&utm_campaign=content_max-jamainternalmedicine&utm_content=olf&utm_term=051324&utm_adv=000004014036

Additional editors’ note: https://jamanetwork.com/journals/jama/fullarticle/2818747?guestAccessKey=8b28cc16-c1e5-4a09-bec6-1f77abfe98db&utm_source=silverchair&utm_medium=email&utm_campaign=content_max-jamainternalmedicine&utm_content=olf&utm_term=051324&utm_adv=000004014036

Abstract:

Importance  Many medical journals, including JAMA, restrict the use of causal language to the reporting of randomized clinical trials. Although well-conducted randomized clinical trials remain the preferred approach for answering causal questions, methods for observational studies have advanced such that causal interpretations of the results of well-conducted observational studies may be possible when strong assumptions hold. Furthermore, observational studies may be the only practical source of information for answering some questions about the causal effects of medical or policy interventions, can support the study of interventions in populations and settings that reflect practice, and can help identify interventions for further experimental investigation. Identifying opportunities for the appropriate use of causal language when describing observational studies is important for communication in medical journals.

Observations  A structured approach to whether and how causal language may be used when describing observational studies would enhance the communication of research goals, support the assessment of assumptions and design and analytic choices, and allow for more clear and accurate interpretation of results. Building on the extensive literature on causal inference across diverse disciplines, we suggest a framework for observational studies that aim to provide evidence about the causal effects of interventions based on 6 core questions: what is the causal question; what quantity would, if known, answer the causal question; what is the study design; what causal assumptions are being made; how can the observed data be used to answer the causal question in principle and in practice; and is a causal interpretation of the analyses tenable?

Conclusions and Relevance  Adoption of the proposed framework to identify when causal interpretation is appropriate in observational studies promises to facilitate better communication between authors, reviewers, editors, and readers. Practical implementation will require cooperation between editors, authors, and reviewers to operationalize the framework and evaluate its effect on the reporting of empirical research.

Author(s): Issa J. Dahabreh, MD, ScD; Kirsten Bibbins-Domingo, PhD, MD, MAS

Publication Date: 9 May 2024

Publication Site: JAMA

doi:10.1001/jama.2024.7741

An Actuarial View of Correlation and Causation—From Interpretation to Practice to Implications

Link: https://www.actuary.org/sites/default/files/2022-07/Correlation.IB_.6.22_final.pdf

Excerpt:

Examine the quality of the theory behind the correlated variables. Is there good reason to believe, as validated by research, the variables would occur together? If such validation does not exist, then the relationship may be spurious. For example, is there any validation to the relationship between the number of driver deaths in railway collisions by year (the horizontal axis), and the annual imports of Norwegian crude oil by the U.S., as depicted below? This is an example of a spurious correlation. It is not clear what a rational explanation would be for this relationship.
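Spurious correlations of this kind are easy to reproduce: two series that happen to share a time trend will correlate strongly even with no causal connection. A minimal sketch with simulated (not the article's) data, where both series simply decline over the same ten years:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two unrelated series that both happen to trend downward over the
# same 10 "years"; the shared trend alone produces a large correlation.
years = np.arange(10)
driver_deaths = 80 - 3 * years + rng.normal(0, 2, size=10)
crude_imports = 120 - 5 * years + rng.normal(0, 4, size=10)

r = np.corrcoef(driver_deaths, crude_imports)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly positive despite no causal link
```

Detrending either series (or asking, as the excerpt suggests, whether any validated theory links them) would expose the relationship as spurious.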

Author(s): Data Science and Analytics Committee

Publication Date: July 2022

Publication Site: American Academy of Actuaries

Leaders: Stop Confusing Correlation with Causation

Link: https://hbr.org/2021/11/leaders-stop-confusing-correlation-with-causation

Excerpt:

A 2020 Washington Post article examined the correlation between police spending and crime. It concluded that, “A review of spending on state and local police over the past 60 years…shows no correlation nationally between spending and crime rates.” This correlation is misleading. An important driver of police spending is the current level of crime, which creates a chicken and egg scenario. Causal research has, in fact, shown that more police lead to a reduction in crime.

….

Yelp overcame a similar challenge in 2015. A consulting report found that companies that advertised on the platform ended up earning more business through Yelp than those that didn’t advertise on the platform. But here’s the problem: Companies that get more business through Yelp may be more likely to advertise. The former COO and I discussed this challenge and we decided to run a large-scale experiment that gave packages of advertisements to thousands of randomly selected businesses. The key to successfully executing this experiment was determining which factors were driving the correlation. We found that Yelp ads did have a positive effect on sales, and it provided Yelp with new insight into the effect of ads.
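The logic of the Yelp experiment, stripped to its essentials, is that random assignment breaks the feedback loop between "already gets business through the platform" and "chooses to advertise". A hypothetical sketch with simulated numbers (the effect size and sample size are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomize thousands of businesses into ad / no-ad groups, then
# compare average sales. Because assignment is a coin flip, latent
# demand is balanced across groups and the naive comparison is causal.
n = 10_000
baseline = rng.normal(100, 20, size=n)              # latent demand per business
gets_ads = rng.integers(0, 2, size=n).astype(bool)  # random assignment
true_effect = 5.0                                   # assumed ad lift (illustrative)
sales = baseline + true_effect * gets_ads + rng.normal(0, 10, size=n)

estimate = sales[gets_ads].mean() - sales[~gets_ads].mean()
print(f"estimated ad effect: {estimate:.2f}")
```

In the observational comparison from the consulting report, `gets_ads` would instead be correlated with `baseline`, and the same difference in means would overstate the effect.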

Author(s): Michael Luca

Publication Date: 5 Nov 2021

Publication Site: Harvard Business Review

Causal design patterns for data analysts

Link: https://emilyriederer.netlify.app/post/causal-design-patterns/

Excerpt:

One antidote to this is true experimentation, in which treatment is randomly assigned within the homogeneous target population. Experimentation, particularly A/B testing, has become a mainstay of industry data science, so why does observational causal inference matter?

Some situations cannot be tested due to ethics or reputational risk

Even when you can experiment, understanding observational causal inference can help you better identify biases and design your experiments

Testing can be expensive. There are direct costs (e.g. testing a marketing promotion) of instituting a policy that might not be effective, implementation costs (e.g. having a tech team implement a new display), and opportunity costs (e.g. holding out a control group and not applying what you hope to be a profitable strategy as broadly as possible)

Randomized experimentation is harder than it sounds! Sometimes experiments may not go as planned, but treating the results as observational data may help salvage some information value

Data collection can take time. We may want to read long-term endpoints like customer retention or attrition after many years. When we wish we had launched an experiment three years ago, historical observational data can help us get a preliminary answer sooner

It’s not either-or but both-and. Due to the financial and temporal costs of experimentation, causal inference can also be a tool to help us better prioritize what experiments are worth running
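One of the observational "causal design patterns" the post refers to is difference-in-differences. A minimal sketch with simulated data, assuming a policy rolled out to one region but not another; under the parallel-trends assumption, the treated region's extra change identifies the effect (all numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Difference-in-differences: subtract the control region's change
# (the shared trend) from the treated region's change, leaving the
# policy effect -- valid only if trends would have been parallel.
n = 5_000
pre_treated  = rng.normal(50, 5, size=n)
post_treated = rng.normal(58, 5, size=n)  # +3 shared trend, +5 policy effect
pre_control  = rng.normal(40, 5, size=n)
post_control = rng.normal(43, 5, size=n)  # +3 shared trend only

did = (post_treated.mean() - pre_treated.mean()) \
    - (post_control.mean() - pre_control.mean())
print(f"diff-in-diff estimate: {did:.2f}")
```

Note that a naive post-period comparison of the two regions (58 vs. 43) would badly overstate the effect, because the regions differ at baseline; the double differencing is what removes both the baseline gap and the shared trend.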

Author: Emily Riederer

Publication Date: 30 January 2021

Publication Site: Emily Riederer