Causality can be intuitive and exciting, as many have hinted. Most of what you see here comes from others; the way the pieces are mixed and linked might look different. And all of it can be translated so as to present it to anyone (see, e.g., the resources listed at the end).
(1). We think of causes and effects when investigating differences, then changes, and then potential changes: if all humans had the same weight and the same blood pressure (and everything else the same), and one person suddenly developed hypertension, we would be scrambling for explanations.
Such differences can be continuous, running from a minimum to a maximum weight, or from a minimum to a maximum possible systolic blood pressure (SysBP), or they can take discrete (more coarse) values, say just two of each: all persons under some average value are considered under-weight (labeled BMILow) and all over it over-weight (BMIHigh), and similarly low SysBP (BPLow) and high SysBP (BPHigh). This 'dichotomization' is a simplification for the purpose of understanding, but also for guiding action: we need a threshold SysBP value above which doctors can tell patients they have hypertension that needs treatment.
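As a tiny illustration of such thresholding (a sketch only; the 130 mmHg cutoff and the measurements here are invented for this example, not clinical guidance):

```python
import numpy as np

# Hypothetical SysBP measurements (mmHg) for a few patients
sysbp = np.array([112.0, 128.0, 135.0, 147.0, 119.0])

# Dichotomize at an illustrative threshold: at or above it, a doctor
# would label the patient hypertensive and consider treatment
THRESHOLD = 130.0
bp_label = np.where(sysbp >= THRESHOLD, "BPHigh", "BPLow")
print(list(zip(sysbp, bp_label)))
# [(112.0, 'BPLow'), (128.0, 'BPLow'), (135.0, 'BPHigh'), ...]
```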
(2). We can observe a patient i having a low systolic BP (BP_{i, Low}), but we can also think about their potential value under some imagined alternative conditions (in an 'adjacent possible' world...): what if s/he lost 10 lbs./pounds? We can add more details for this patient i (like BP_i, 200 lbs., 5 feet tall, etc.). We will label potential values using superscripts, like BP_i^{If Treated}, to indicate that these 'happen up there' (not here on the ground, in this reality); others use this notation too, e.g., Judith Lok [1], Miguel Hernán & James Robins [2], and Joffe, Yang & Feldman [3]. Note that this allows for thinking about values that are un-realized (yet, or un-realizable), BP_i^{If Treated}, vs. realized BP_{i, Was Treated}.
That is also how folks talk about potential outcomes for a single person (case), which could differ under different treatment conditions: BP_i^{If Treated} ≠ BP_i^{If not Treated}. The 1-person causal effect is just their difference: Effect_i = BP_i^{If Treated} − BP_i^{If not Treated}.
(3). Here the language can shift to counter-factuals (CFs): these are (potentially) contrary-to-fact events (see Note 1), like BP*_{i, Was not Treated}^{If Treated} (here the sub/superscript split bears fruit!). Note that some of these mental combinations can never be realized, hence never observed, like the one just noted, or the converse BP*_{i, Was Treated}^{If not Treated} (for marking the un-realizable ones, I add a *). Other values will be realized, for example BP_{i, Was Treated}^{If Treated} = BP_i for this patient.
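To make the bookkeeping concrete, here is a minimal sketch (all numbers invented) of one patient's two potential outcomes: the 1-person effect needs both values, but only the realized one is ever observed, so the starred CF stays missing.

```python
import math

# Patient i: two potential SysBP values (invented numbers)
bp_if_treated = 125.0      # BP_i^{If Treated}
bp_if_untreated = 140.0    # BP_i^{If not Treated}

# The 1-person causal effect needs BOTH potential values
effect_i = bp_if_treated - bp_if_untreated   # -15.0 mmHg

# In reality patient i was treated, so only one value is realized;
# the counterfactual BP*_{i, Was Treated}^{If not Treated} stays missing
was_treated = True
observed = bp_if_treated if was_treated else bp_if_untreated
counterfactual = math.nan  # 'beyond observational reach'
print(observed, counterfactual, effect_i)  # 125.0 nan -15.0
```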
(4). We are now ready to clarify what causal calculus or inference really does: every person's 'data' is always (at least) half missing, even in randomized clinical trials (RCTs), because one of these CFs is 'beyond observational reach'/un-realizable:
(i) for RCT treated folks: Effect_i^{½*, RCT} = BP_{i, Was Treated}^{If Treated} − BP*_{i, Was Treated}^{If not Treated}, and
(ii) for RCT untreated folks: Effect_j^{½*, RCT} = BP*_{j, Was not Treated}^{If Treated} − BP_{j, Was not Treated}^{If not Treated}.
This clarifies the causality conundrum present even when one randomizes cases: half of the data is missing! The benefit of RCTs is that we can safely replace the beyond-reach values (not individually, but on average):
(i) for RCT treated folks: the Average(BP*_{i, Was Treated}^{If not Treated}) can be replaced by the observed Average(BP_{j, Was not Treated}^{If not Treated}) from the untreated folks, observed because the 'what if' superscript matches the 'what happened' subscript!
(ii) for RCT untreated folks: the Average(BP*_{j, Was not Treated}^{If Treated}) can be replaced by the observed Average(BP_{i, Was Treated}^{If Treated}) from the treated folks.
In words, randomization allows us to safely assume that what would have happened (on average) to the treated had they not been treated is what actually happened to the untreated folks (on average), and conversely, that what would have happened to the untreated had they been treated is what actually happened to the treated folks.
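A minimal simulation of why this average swap works (the data-generating numbers are invented): we generate both potential outcomes for everyone, randomize, and check that the observed difference in group means recovers the average effect we built in.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Both potential SysBP outcomes per person (never jointly observable in real life)
bp_if_untreated = rng.normal(140, 10, n)                    # BP_i^{If not Treated}
bp_if_treated = bp_if_untreated - 8 + rng.normal(0, 2, n)   # built-in avg effect: -8

# Randomize: each person reveals only one of their two potential values
treated = rng.random(n) < 0.5
bp_observed = np.where(treated, bp_if_treated, bp_if_untreated)

true_avg_effect = np.mean(bp_if_treated - bp_if_untreated)
diff_in_means = bp_observed[treated].mean() - bp_observed[~treated].mean()
print(f"true average effect: {true_avg_effect:.2f}")    # ~ -8.00
print(f"RCT difference in means: {diff_in_means:.2f}")  # ~ -8.00 (matches on average)
```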
(5). From here on, one can examine extensions of such causal inquiries:
(i). What is the value of an outcome, say BP, if a randomized intervention was meant to lower it by means of losing weight? We can then ask what these values are: BP**_{i, Was Treated}^{Weight as if not Treated} and BP**_{i, Was not Treated}^{Weight as if Treated}. Judea Pearl (and the 'causal' mediation writers) call these nested CFs (like CFs 'on steroids', or 'squared' CFs); there are two 'ifs' in each quantity: one for the treatment and one for the weight. They literally are 'more unobservable' than even the 'simple' CFs BP*_{i, Was Treated}^{If not Treated} and BP*_{j, Was not Treated}^{If Treated}, hence the double unobservable marker **! (See the first sketch after this list.)
(ii). Another relevant illustration, one that happens in any RCT, concerns compliance: some patients do or do not comply, and while some would just want to see 'what is THE benefit', regardless of compliance, a more pragmatic/realistic research question points to the difference in outcome values among compliers only: what happened to treated folks who complied (those who didn't... are pretty much untreated, aren't they?) vs. the untreated folks who would have complied, had they been treated (see the second sketch after this list):
BP_{i, Complied & Was Treated} vs. BP*_{i, Was not Treated}^{If Complied}
We cannot 'see', however, the compliance of patients who have not been treated, so BP*_{i, Was not Treated}^{If Complied} is not observable (neither is BP*_{i, Was not Treated}^{If not Complied}). We should therefore try to 'guess' who would have complied and who would not have complied from among the untreated folks.
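First, a sketch of the nested CFs from (i). In a simulation we can 'play god' and generate the cross-world quantity BP**_{i, Was Treated}^{Weight as if not Treated} directly, because we wrote the structural equations ourselves (all coefficients invented); in real data this quantity is exactly what stays doubly unobservable.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Potential weights (lbs.) under each arm: treatment lowers weight by ~10
weight_if_untreated = rng.normal(200, 20, n)
weight_if_treated = weight_if_untreated - 10

# One shared noise term per person, so 'both worlds' stay comparable
eps = rng.normal(0, 5, n)

def bp(treated, weight):
    # Invented structural model: BP responds to weight and to treatment directly
    return 90 + 0.25 * weight - 4 * treated + eps

# The nested CF: treated, but with weight 'as if not treated'
bp_nested = bp(1, weight_if_untreated)  # BP**_{Was Treated}^{Weight as if not Treated}

# Natural direct effect: treatment's effect with weight held at its untreated value
nde = np.mean(bp_nested - bp(0, weight_if_untreated))    # ~ -4
# Natural indirect effect: the part flowing only through weight loss
nie = np.mean(bp(1, weight_if_treated) - bp_nested)      # ~ -2.5
print(f"NDE ~ {nde:.2f}, NIE ~ {nie:.2f}")
```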
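And a sketch of the compliance question from (ii) (invented numbers again): by generating who would comply if offered treatment, we can check that the classic instrumental-variable (Wald) ratio recovers the compliers-only effect, even though compliance among the untreated is never observed in real data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Who WOULD comply if assigned to treatment (latent in real data for the untreated)
would_comply = rng.random(n) < 0.7

assigned = rng.random(n) < 0.5             # randomized assignment
took_treatment = assigned & would_comply   # one-sided noncompliance

bp_untreated = rng.normal(140, 10, n)
effect = -8.0                              # built-in effect among those actually treated
bp_observed = bp_untreated + effect * took_treatment

# Wald / IV estimator: intent-to-treat effect scaled by the compliance gap
itt = bp_observed[assigned].mean() - bp_observed[~assigned].mean()
compliance_gap = took_treatment[assigned].mean() - took_treatment[~assigned].mean()
late = itt / compliance_gap
print(f"ITT ~ {itt:.2f}, compliers-only effect ~ {late:.2f}")  # ~ -5.6 and ~ -8
```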
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ Do you want to learn more about causality? At UConn there are some options; one is the Causal Program Evaluation course that Dr. Eric Brunner has taught (some of his notes were used here to make sense of the PO approach as envisioned by economists; more on how economists see causality is in Scott Cunningham's online book). Felix Elwert is one of the best explainers/translators of such topics out there (see his treatments of instrumental variables and DAGs); his workshop is a must. The Harvard causality team has a 5-day workshop that is thrilling, and some have courses with materials posted online, e.g., Maya L. Petersen & Laura B. Balzer at UC Berkeley.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Notice any discrepancies? Something missing? Email with suggestions: coman@uchc.edu! Also, check the other more technical posts: http://evaluatehelp.blogspot.com/
Note 1: I would use 'potential outcomes' lingo to refer to any potential value/event, some realizable, some not, and counter-factuals (CFs) to refer to the truly contrary-to-fact ones, like the 'impossible' combinations, or the ones referring to past alternative events ('what if Oswald hadn't shot Kennedy?').
[1] Lok, J.J. and R.J. Bosch, Causal organic indirect and direct effects: Closer to the original approach to mediation analysis, with a product method for binary mediators. Epidemiology, 2021. 32(3): p. 412-420.
[2] Hernán, M.A. and J.M. Robins, Causal Inference. Online drafts at https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/. 2018.
[3] Joffe, M.M., W.P. Yang, and H.I. Feldman, Selective ignorability assumptions in causal inference. The International Journal of Biostatistics, 2010. 6(2).