{"id":69,"date":"2025-04-22T21:55:10","date_gmt":"2025-04-23T01:55:10","guid":{"rendered":"https:\/\/health.uconn.edu\/causality\/?page_id=69"},"modified":"2025-07-28T11:02:21","modified_gmt":"2025-07-28T15:02:21","slug":"toolscause","status":"publish","type":"page","link":"https:\/\/health.uconn.edu\/causality\/toolscause\/","title":{"rendered":"Tools to evaluate causality"},"content":{"rendered":"<p><span style=\"text-decoration: underline\"><strong>(3).<\/strong> Tools to evaluate causality from different scientific domains<\/span><\/p>\n<p>***Some preliminary ground-setting first; causality rests on contrary-to-fact assumptions: this \u201cwhat if\u2026?\u201d permeates all statistical modeling: the hypothesis testing procedure starts with a long list of \u2018if\u2019s, e.g. \u2018if patients are independent cases, if the sample was selected randomly, if the distribution of the outcome resembles a normal distribution\u2019, and so on (multiple regression implies several assumptions).<\/p>\n<p><strong>(i). <\/strong>The path analytic \u2018tracing rule\u2019 and \u2018causal calculus\u2019.<\/p>\n<p>*** The \u2018tracing rule\u2019 is a visual inspection rule that allows one to \u2018turn correlation into causation\u2019 by decomposing a correlation into its causal and non-causal components. It has been updated recently to handle visually Judea Pearl\u2019s \u2018causal calculus\u2019<a href=\"#_edn1\" name=\"_ednref1\"><span>[i]<\/span><\/a>: deriving observational\/associational consequences from hypothesized\/known causal structures. 
I will show that this simply means turning expressions that contain \u2018unobservables\u2019 (counter-factuals) of the form A1c<strong><em><sub>i<\/sub><\/em><sup> If.Lower.BMI<\/sup><\/strong><sub> <\/sub>into the observable counterparts A1c<strong><em><sub>i<\/sub><\/em><\/strong><strong><sub>Lowered.BMI<\/sub><\/strong>.<\/p>\n<p>To take the simple example from part 1, one common assumption (or expectation, \u2018hypothesis\u2019) is that<\/p>\n<p>A1c<strong><em><sub>i<\/sub><\/em><sup> If.LowerBMI<\/sup><\/strong><sub> <\/sub><strong><sub>&lt;<\/sub><\/strong> A1c<strong><em><sub>i <\/sub><\/em><\/strong><em><sup>If.HigherBMI<\/sup><\/em><strong><sub><br \/>\n<\/sub><\/strong>which says that if a patient\u2019s BMI were to drop (\u2018IF\u2019), his\/her A1c would drop too; ignoring for now what this A1c(BMI) functional relation might really look like<a href=\"#_edn2\" name=\"_ednref2\"><span>[ii]<\/span><\/a>, some U shape, or something more complicated, we commonly just try to \u2018confirm\u2019 the presence of this effect Effect<sup> <\/sup><strong><sub>BMI -&gt; A1c<\/sub><\/strong> by \u2018fitting\u2019 (forcing onto the data!) a linear relation, by means of a linear regression, written at the level of patient i, like<\/p>\n<p>A1c<strong><em><sub>i<\/sub><\/em><\/strong> = Effect<strong>.<\/strong><strong><sub>BMI -&gt; A1c<\/sub><\/strong> <strong>\u00b7<\/strong> BMI<strong><em><sub>i<\/sub><\/em><\/strong> + error<strong><em><sub>i<\/sub><\/em><\/strong> where the little dot<strong>. <\/strong>in Effect<strong>.<\/strong><strong><sub>BMI -&gt; A1c<\/sub><\/strong> reminds us that this effect is one and the same for all folks, i.e. there is no i subscript! If we could derive a universal value (like the gravitational constant G in the force between 2 bodies), treatments and interventions would be much simpler! 
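As a minimal sketch of what \u2018fitting\u2019 this one-predictor linear relation means in practice (Python, with a made-up effect size and simulated patients, not real A1c data), the least-squares estimate of Effect.BMI -> A1c is just cov(BMI, A1c) / var(BMI):

```python
import random

random.seed(1)
n = 50_000

# Assumed 'true' universal effect, chosen only for illustration
EFFECT_BMI_A1C = 0.25

# Simulated patients: centered BMI, plus patient-level noise in A1c
bmi = [random.gauss(0, 1) for _ in range(n)]
a1c = [EFFECT_BMI_A1C * b + random.gauss(0, 1) for b in bmi]

def ols_slope(x, y):
    """Least-squares slope of y on x: cov(x, y) / var(x)."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

effect_hat = ols_slope(bmi, a1c)  # recovers ~0.25 in large samples
```

The point of the sketch is only that the fitted slope is a single number forced on all patients, exactly the \u2018no i subscript\u2019 remark above.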
This gets complicated when we add a third variable, say systolic blood pressure SysBP, because we can assume this model behind the data:<\/p>\n<p>A1c<strong><em><sub>i<\/sub><\/em><\/strong> = Effect<strong>*.<\/strong><strong><sub>BMI -&gt; A1c<\/sub><\/strong> <strong>\u00b7<\/strong> BMI<strong><em><sub>i<\/sub><\/em><\/strong> + Effect<strong>*.<\/strong><strong><sub>SysBP -&gt; A1c<\/sub><\/strong> <strong>\u00b7<\/strong> SysBP<strong><em><sub>i<\/sub><\/em><\/strong> + error<strong>*<em><sub>i<\/sub><\/em><\/strong> (we ignore the \u2018intercept\u2019 term, the conditional mean), where we added <strong>*<\/strong> (per Reichenbach too [1], p. 137) to mark \u2018a change\u2019 in the initial quantity: the initial effect Effect<strong>.<\/strong><strong><sub>BMI -&gt; A1c<\/sub><\/strong> changes to Effect<strong>*.<\/strong><strong><sub>BMI -&gt; A1c<\/sub><\/strong>; it now becomes \u2018the effect of BMI on A1c, controlled for the other effect, of SysBP on A1c\u2019. Commonly, when SysBP and BMI correlate positively, Effect<strong>*.<\/strong><strong><sub>BMI -&gt; A1c<\/sub><\/strong> &lt; Effect<strong>.<\/strong><strong><sub>BMI -&gt; A1c<\/sub><\/strong>.<\/p>\n<p>*** Path analysis was the first formal method promising to separate out the causal and non-causal components from the \u2018surface\u2019 association\/correlation BMI\u2194A1c. The possible causal sources of this correlation are simply: (1). Direct causal effect BMI-&gt;A1c; (2). Direct causal effect A1c-&gt;BMI; (3). Causal effects on both from a common cause 3rd-&gt;A1c &amp; 3rd-&gt;BMI; (4). 
Other combinations of these where other variables are involved, like causes of these 3 variables: spelling out graphically how we expect them to be related causally produces a path model<a href=\"#_edn3\" name=\"_ednref3\"><span>[iii]<\/span><\/a>; alternatively, some use equations.<\/p>\n<p>If BMI -&gt; SysBP -&gt; A1c (with no direct BMI -&gt; A1c causal effect), we would expect the correlation between BMI &amp; A1c seen in observational data to be about the product of the correlation between BMI &amp; SysBP and the correlation between SysBP &amp; A1c; if however the causal real world looks instead like BMI -&gt; A1c -&gt; SysBP, different observational consequences ensue.<a href=\"#_edn4\" name=\"_ednref4\"><span>[iv]<\/span><\/a><\/p>\n<p>*** Let\u2019s see how this would work, and why it does:<\/p>\n<p>If we have some \u2018insight\u2019 that one predictor both predicts the final outcome and is itself an outcome of the other predictor, we have a second equation to write:<\/p>\n<p>A1c<strong><em><sub>i<\/sub><\/em><\/strong> = Effect<strong>.<\/strong><strong><sub>BMI -&gt; A1c<\/sub><\/strong> <strong>\u00b7<\/strong> BMI<strong><em><sub>i<\/sub><\/em><\/strong> + Effect<strong>.<\/strong><strong><sub>SysBP -&gt; A1c<\/sub><\/strong> <strong>\u00b7<\/strong> SysBP<strong><em><sub>i<\/sub><\/em><\/strong> + error<strong><sub>A1c <em>i<\/em><\/sub><\/strong><\/p>\n<p>SysBP<strong><em><sub>i<\/sub><\/em><\/strong> = Effect<strong>.<\/strong><strong><sub>BMI -&gt; <\/sub><\/strong><strong><sub>SysBP<\/sub><\/strong> <strong>\u00b7<\/strong> BMI<strong><em><sub>i<\/sub><\/em><\/strong> + error<strong><sub>SysBP <em>i<\/em><\/sub><\/strong><\/p>\n<p>This invites substituting the second expression into the first, to find out the A1c effect:<\/p>\n<p>A1c<strong><em><sub>i<\/sub><\/em><\/strong> = Effect<strong>.<\/strong><strong><sub>BMI -&gt; A1c<\/sub><\/strong> <strong>\u00b7<\/strong> BMI<strong><em><sub>i<\/sub><\/em><\/strong> + 
Effect<strong>.<\/strong><strong><sub>SysBP -&gt; A1c<\/sub><\/strong> <strong>\u00b7<\/strong> (Effect<strong>.<\/strong><strong><sub>BMI -&gt; <\/sub><\/strong><strong><sub>SysBP<\/sub><\/strong> <strong>\u00b7<\/strong> BMI<strong><em><sub>i<\/sub><\/em><\/strong> + error<strong><sub>SysBP <em>i<\/em><\/sub><\/strong>) + error<strong><sub>A1c <em>i<\/em><\/sub><\/strong><\/p>\n<p>so we get<\/p>\n<p>A1c<strong><em><sub>i<\/sub><\/em><\/strong> = BMI<strong><em><sub>i<\/sub><\/em><\/strong> <strong>\u00b7 (<\/strong>Effect<strong>.<\/strong><strong><sub>BMI -&gt; A1c<\/sub><\/strong>\u00a0 + Effect<strong>.<\/strong><strong><sub>SysBP -&gt; A1c<\/sub><\/strong> <strong>\u00b7<\/strong> Effect<strong>.<\/strong><strong><sub>BMI -&gt; <\/sub><\/strong><strong><sub>SysBP<\/sub><\/strong>) + (error<strong><sub>A1c <em>i<\/em><\/sub><\/strong> + Effect<strong>.<\/strong><strong><sub>SysBP -&gt; A1c<\/sub><\/strong> <strong>\u00b7<\/strong> error<strong><sub>SysBP <em>i<\/em><\/sub><\/strong>)<\/p>\n<p>This directly \u2018demonstrates\u2019, for the 3-variable indirect-effect model, the \u2018tracing rule\u2019<a href=\"#_edn5\" name=\"_ednref5\"><span>[v]<\/span><\/a>, but we express it in the famous Baron &amp; Kenny [5] language instead: c = c\u2019 + a<strong>\u00b7<\/strong>b, or: the total effect of BMI, c, is composed of a direct effect c\u2019, i.e. Effect<strong>.<\/strong><strong><sub>BMI -&gt; A1c<\/sub><\/strong>, and an \u2018indirect effect\u2019 ind. = a<strong>\u00b7<\/strong>b through the mediator SysBP: Effect<strong>.<\/strong><strong><sub>SysBP -&gt; A1c<\/sub><\/strong> <strong>\u00b7<\/strong> Effect<strong>.<\/strong><strong><sub>BMI -&gt; <\/sub><\/strong><strong><sub>SysBP<\/sub><\/strong>.<\/p>\n<p>*** Note that this is a rather big departure from the \u20182 merely related predictors\u2019 model: the BMI -&gt; A1c effect there was just the direct effect seen here in the mediation model: mediation changes the very logic and interpretation of the \u2018hypothesized effect\u2019 we are searching for! 
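The substitution above can be checked numerically. A sketch (Python, simulated data with assumed path coefficients a, b, c\u2019, chosen only for illustration): fit the three regressions by ordinary least squares and verify that the total effect equals direct plus indirect, c = c\u2019 + a\u00b7b, an identity that holds exactly for least-squares estimates computed on the same sample:

```python
import random

random.seed(2)
n = 20_000

# Assumed path coefficients (illustrative, not estimated from real data)
a_true, b_true, c_prime_true = 0.5, 0.3, 0.2  # BMI->SysBP, SysBP->A1c, direct BMI->A1c

bmi   = [random.gauss(0, 1) for _ in range(n)]
sysbp = [a_true * x + random.gauss(0, 1) for x in bmi]
a1c   = [c_prime_true * x + b_true * s + random.gauss(0, 1)
         for x, s in zip(bmi, sysbp)]

def slope(x, y):
    """Simple-regression slope: cov(x, y) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return cov / sum((xi - mx) ** 2 for xi in x)

def slopes2(x1, x2, y):
    """Two-predictor OLS slopes via the 2x2 normal equations (centered data)."""
    m1, m2, my = sum(x1) / len(x1), sum(x2) / len(x2), sum(y) / len(y)
    s11 = sum((u - m1) ** 2 for u in x1)
    s22 = sum((v - m2) ** 2 for v in x2)
    s12 = sum((u - m1) * (v - m2) for u, v in zip(x1, x2))
    s1y = sum((u - m1) * (w - my) for u, w in zip(x1, y))
    s2y = sum((v - m2) * (w - my) for v, w in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

c          = slope(bmi, a1c)           # total effect of BMI on A1c
a          = slope(bmi, sysbp)         # Effect.BMI -> SysBP
c_prime, b = slopes2(bmi, sysbp, a1c)  # direct effect c' and Effect.SysBP -> A1c

# Tracing-rule / mediation decomposition: total = direct + indirect
assert abs(c - (c_prime + a * b)) < 1e-8
```

Note that the final assertion is an algebraic identity of OLS, not a statistical approximation, which is exactly what the substitution derivation above shows.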
Despite this obvious state of affairs, some still resist the \u2018indirect effect\u2019 approach and even its very wording, preferring to talk about tenebrous constructs, like \u2018seemingly unrelated regressions\u2019 [6].<\/p>\n<p>*** A more important causation note: turning a co-predictor into a mediator by allowing a direct effect between the primary and (now) secondary predictor changes the causal setting and implications. The best example is the \u2018Race &#8211;&gt; Health Outcome\u2019 research question, which is regularly evaluated by including income, e.g., as a covariate, to \u2018control for it\u2019, i.e. give us the \u2018pure\u2019 racial difference effect; the problem, however, is that Income might causally follow Race, and hence should NOT be controlled for.<\/p>\n<p>+++ Moreover, the \u2018gender hiring discrimination\u2019 question, shown by Judea Pearl in <a href=\"https:\/\/en.wikipedia.org\/wiki\/The_Book_of_Why\">BoW<\/a> (section \u2018In search of a language (the Berkeley admissions paradox)\u2019; also <a href=\"https:\/\/modeling.uconn.edu\/speaker\/judea-pearl\/\">see MMM<\/a>) may turn out to be answerable by assessing the direct effect, distinct from the indirect effect (through, say, Department type, or even Qualifications).<\/p>\n<p>*** It can be shown that any statistical model can be \u2018solved for\u2019 using the tracing rule, without any software aid: instrumental variable model, latent (common factor) model, latent growth model, etc.<\/p>\n<p><strong>(ii). 
<\/strong>Other fields: potential outcomes (POs) and economics\/econometrics<\/p>\n<p>Several \u2018tools\u2019<a href=\"#_edn6\" name=\"_ednref6\"><span>[vi]<\/span><\/a> (\u2018hammers\u2019 for the same \u2018nail\u2019) exist; the graphical ones<a href=\"#_edn7\" name=\"_ednref7\"><span>[vii]<\/span><\/a> are more intuitive, so I focus on them; e.g.:<\/p>\n<ol>\n<li>Directed Acyclic Graphs<strong><sup>ModernCausality<\/sup><\/strong>\u00a0* Elias Bareinboim developed a revolutionary tool that coded the entire \u2019causal calculus\u2019 math and can derive step by step the implications of any causal model: <a href=\"https:\/\/www.causalfusion.net\/\">CausalFusion.net<\/a> (requires free registration).<\/li>\n<\/ol>\n<ol start=\"2\">\n<li>Propensity Score<a href=\"#_edn8\" name=\"_ednref8\"><span>[viii]<\/span><\/a> <u>Matching<\/u><em><sup>ClassicalStats<\/sup><\/em> (<u>Matching and Subclassification, both<\/u> under the Potential Outcomes Causal Model<em><sup>ClassicalStats<\/sup><\/em>)<\/li>\n<li>Regression Discontinuity<strong><sup>Economics<\/sup><\/strong><\/li>\n<li>Instrumental Variables<strong><sup>Economics<\/sup><\/strong><\/li>\n<li>Difference-in-Differences<strong><sup>Economics<\/sup><\/strong><\/li>\n<li>Synthetic Control<sup>New<strong>Economics<\/strong><\/sup><\/li>\n<\/ol>\n<p>Other tools (still emerging, being explored) are mentioned in the \u2018Opportunities\u2019 part 4.<\/p>\n<p>*** The last part, # 4, will briefly go over some remaining challenges and opportunities for both advancing this field and better explaining it, like the equivalence of the potential outcomes (\u2018Rubin\u2019, more properly Cochran\u2019s\u2026 see note <em>viii<\/em> below and image insert) and causal calculus (Pearl) approaches to causality.<\/p>\n<p><span style=\"text-decoration: underline\"><strong>FOOTnotes:<\/strong><\/span><\/p>\n<p><a href=\"#_ednref1\" name=\"_edn1\"><span>[i]<\/span><\/a> \u201cBoW, p. 
&amp; \u201cIt happened not because I am smarter but because I took Sewall Wright&#8217;s idea seriously and milked it to its logical conclusions as much as I could.\u201d <a href=\"https:\/\/listserv.ua.edu\/cgi-bin\/wa?A2=SEMNET;a3f2e16.1806\">SEMNET<\/a><\/p>\n<p><a href=\"#_ednref2\" name=\"_edn2\"><span>[ii]<\/span><\/a> We cannot hope to one day uncover the exact form of this relation, like in physics we have the pressure, volume, and temperature related through a clear law-like relation, or even like some economists propose specific relations between their variables, like in the<\/p>\n<p><a href=\"#_ednref3\" name=\"_edn3\"><span>[iii]<\/span><\/a> \u201cA statistical model is a formal representation of a set of relationships between variables. [\u2026] Sometimes the designation between independent and dependent variable depends on the variables under study and the\u00a0 researcher&#8217;s theoretical orientation. For instance, researchers study the relationship between self-esteem and academic performance. Some designate self-esteem as the independent variable and academic performance as the dependent variable. Others reverse the designations. [\u2026] A representation of a model that uses arrows is called a path diagram.\u201d [2] p. 
184-5<\/p>\n<p><a href=\"#_ednref4\" name=\"_edn4\"><span>[iv]<\/span><\/a> Note that the \u2018no direct effect, the entire effect is indirect, through a mediator\u2019 situation is not far-fetched: the famous one is the \u2018blessing of the cars -&gt; auto accidents\u2019 intervention [3], which worked only through the \u2018drivers using seat belts more often\u2019 mediator: the direct effect is not logically possible.<\/p>\n<p>Also of note: rarely do we have only three variables on hand to investigate; the causal models are often larger than this, and the consequences are more intricate, but an online app does this reasoning for us: see an example from <a href=\"https:\/\/academic.oup.com\/fampra\/article\/39\/3\/556\/6463006\">Family Practice<\/a>\u00a0 at <a href=\"http:\/\/dagitty.net\/m4TETpl\">http:\/\/dagitty.net\/m4TETpl<\/a> (model derived from some data analyses though).<\/p>\n<p><a href=\"#_ednref5\" name=\"_edn5\"><span>[v]<\/span><\/a> \u201cThe correlation between two variables can be shown to equal the sum of the products of the chains of path coefficients along all of the paths by which they are connected.\u201d [4] p. 329 &amp; Fig. 6<\/p>\n<p><a href=\"#_ednref6\" name=\"_edn6\"><span>[vi]<\/span><\/a> \u201cFrom the beginning, graphs have played an important role in representing the set of causal influences. The pioneering work of Wright (1921, 1934) has inspired the more recent developments of structural equation models (Joreskog, 1978) and graphical models (Dawid, 1979; Lauritzen and Wermuth, 1989; Cox and Wermuth, 1996). An approach using the modelling of \u2018potential outcome\u2019, which is often called the counterfactual approach, has been proposed in the context of clinical trials by Rubin (1974) and further studied by Holland (1986) among others. 
The counterfactual approach has been extended to the study of longitudinal incomplete data in several papers, the results of which have been gathered together by van der Laan and Robins (2002). Spirtes et al. (2000) and Pearl (2000) have developed the issue of investigating causality with graphical models.\u201d [7] p. 719-20<\/p>\n<p><a href=\"#_ednref7\" name=\"_edn7\"><span>[vii]<\/span><\/a> One of the densest statistical quotes I know of is \u201cNetworks employing Directed Acyclic Graphs (DAGs) have a long and rich tradition, starting with the geneticist Wright (1921[8]). He developed a method called path analysis [Wright, 1934[9]] which, later on, became an established representation of causal models in economics [Wold, 1964[10]], sociology [Blalock, 1971[11]] and psychology [Duncan, 1975[12]]. Influence diagrams represent another application of DAG representation [Howard and Matheson, 1981[13]], [Shachter, 1988[14]] and [Smith, 1987[15]]. These were developed for decision analysis and contain both chance nodes and decision nodes (our definition of causal models excludes decision nodes). Recursive models is the name given to such networks by statisticians seeking meaningful and effective decompositions of contingency tables [Lauritzen, 1982[16]], [Wermuth &amp; Lauritzen, 1983[17]], [Kiiveri et al., 1984[18]]. Bayesian Belief Networks (or Causal Networks) is the name adopted for describing networks that perform evidential reasoning [Pearl, 1986a[19], 1988[20]]. 
This paper establishes a clear semantics for these networks that might explain their wide usage as models for forecasting, decision analysis and evidential reasoning.\u201d [21] p. 136<\/p>\n<p><a href=\"#_ednref8\" name=\"_edn8\"><span>[viii]<\/span><\/a> Note that Donald Rubin, who is credited with \u2018inventing propensity scores\u2019, does not cite his doctoral advisor Cochran\u2019s 1950 paper in Biometrika [22], where Cochran first proposed the matching tool; there is no mention of it in the book with Imbens [23]; I cite it in [24].<\/p>\n<p><em>References\u00a0<\/em><\/p>\n<ol>\n<li>Reichenbach, H., <em>The philosophy of space and time<\/em>. 1957: Courier Corporation.<\/li>\n<li>Kenny, D.A., <em>Statistics for the social and behavioral sciences. Posted by author at <\/em><a href=\"https:\/\/davidakenny.net\/doc\/statbook\/kenny87.pdf\"><em>https:\/\/davidakenny.net\/doc\/statbook\/kenny87.pdf<\/em><\/a>. 1987: Little, Brown Boston.<\/li>\n<li>Istre, G.R., et al., <em>Increasing the Use of Child Restraints in Motor Vehicles in a Hispanic Neighborhood.<\/em> American Journal of Public Health, 2002. <strong>92<\/strong>(7): p. 1096-1099.<\/li>\n<li>Wright, S., <em>The relative importance of heredity and environment in determining the piebald pattern of guinea-pigs.<\/em> Proceedings of the National Academy of Sciences, 1920. <strong>6<\/strong>(6): p. 320-332.<\/li>\n<li>Baron, R.M. and D.A. Kenny, <em>The moderator\u2013mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations.<\/em> Journal of Personality and Social Psychology, 1986. <strong>51<\/strong>(6): p. 1173-1182.<\/li>\n<li>Beasley, T.M., <em>Seemingly unrelated regression (SUR) models as a solution to path analytic models with correlated errors.<\/em> Multiple linear regression viewpoints, 2008. <strong>34<\/strong>(1): p. 1-7.<\/li>\n<li>Commenges, D. and A. 
G\u00e9gout-Petit, <em>A general dynamical statistical model with causal interpretation.<\/em> Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2009. <strong>71<\/strong>(3): p. 719-736.<\/li>\n<li>Wright, S., <em>Correlation and causation. Part I. Method of path coefficients.<\/em> Journal of agricultural research, 1921. <strong>20<\/strong>(7): p. 557-585.<\/li>\n<li>Wright, S., <em>The Method of Path Coefficients <\/em><a href=\"https:\/\/drive.google.com\/file\/d\/1Iq3vEFZna4XXzjs-JIW9mnBu7dT-muBC\/view?usp=sharing\"><em>https:\/\/drive.google.com\/file\/d\/1Iq3vEFZna4XXzjs-JIW9mnBu7dT-muBC\/view?usp=sharing<\/em><\/a><em>.<\/em> The Annals of Mathematical Statistics, 1934. <strong>5<\/strong>(3): p. 161-215.<\/li>\n<li>Wold, H.O., <em>Econometric model building: essays on the causal chain approach<\/em>. 1964: North-Holland Publishing Company.<\/li>\n<li>Blalock, H.M., <em>Causal models in the social sciences<\/em>. 1985: Transaction Publishers.<\/li>\n<li>Duncan, O.D., <em>Introduction to structural equation models<\/em>. 1975, New York: Academic Press.<\/li>\n<li>Howard, R.A. and J.E. Matheson, <em>Influence diagrams<\/em>, in <em>Readings on the Principles and Applications of Decision Analysis: General collection<\/em>, R.A. Howard, Editor. 1981, Strategic Decisions Group.<\/li>\n<li>Shachter, R.D., <em>Probabilistic inference and influence diagrams.<\/em> Operations research, 1988. <strong>36<\/strong>(4): p. 589-604.<\/li>\n<li>Smith, J.Q., <em>Influence diagrams for Bayesian decision analysis.<\/em> European journal of operational research, 1989. <strong>40<\/strong>(3): p. 363-376.<\/li>\n<li>Lauritzen, S.L., <em>Lectures on contingency tables<\/em>. University of Aalborg Press, Aalborg, Denmark. 1979: Inst. of mathematical statistics, University of Copenhagen.<\/li>\n<li>Wermuth, N. and S.L. Lauritzen, <em>Graphical and recursive models for contingency tables.<\/em> Biometrika, 1983. <strong>70<\/strong>(3): p. 
537-552.<\/li>\n<li>Kiiveri, H., T.P. Speed, and J.B. Carlin, <em>Recursive causal models <\/em><a href=\"https:\/\/drive.google.com\/file\/d\/1EFYMJqWs2LrcAlcg4Z0hxIoyg7MQR1tw\/view?usp=sharing\"><em>https:\/\/drive.google.com\/file\/d\/1EFYMJqWs2LrcAlcg4Z0hxIoyg7MQR1tw\/view?usp=sharing<\/em><\/a><em>.<\/em> Journal of the Australian Mathematical Society, 1984. <strong>36<\/strong>(1): p. 30-52.<\/li>\n<li>Pearl, J., <em>Fusion, propagation, and structuring in belief networks.<\/em> Artificial Intelligence, 1986. <strong>29<\/strong>(3): p. 241-288.<\/li>\n<li>Pearl, J., <em>Probabilistic reasoning in intelligent systems: networks of plausible inference <\/em><a href=\"https:\/\/drive.google.com\/file\/d\/1gYtsPNIoFolgrveDF7kjLawU11yISmdO\/view?usp=sharing\"><em>https:\/\/drive.google.com\/file\/d\/1gYtsPNIoFolgrveDF7kjLawU11yISmdO\/view?usp=sharing<\/em><\/a>. 1988: Morgan Kaufmann.<\/li>\n<li>Geiger, D. and J. Pearl, <em>On the logic of causal models<\/em>, arXiv preprint arXiv:1304.2355, in <em>Machine Intelligence and Pattern Recognition<\/em>, 1990. p. 3-14.<\/li>\n<li>Cochran, W.G., <em>The comparison of percentages in matched samples <\/em><a href=\"https:\/\/drive.google.com\/file\/d\/1oqg2fU4RjcSq1LQULIzLtlUkiqBbhYe2\/view?usp=sharing\"><em>https:\/\/drive.google.com\/file\/d\/1oqg2fU4RjcSq1LQULIzLtlUkiqBbhYe2\/view?usp=sharing<\/em><\/a><em>.<\/em> Biometrika, 1950. <strong>37<\/strong>(3\/4): p. 256-266.<\/li>\n<li>Imbens, G.W. and D.B. Rubin, <em>Causal inference in statistics, social, and biomedical sciences<\/em>. 2015: Cambridge University Press.<\/li>\n<li>Coman, E., H. Wu, and S. Assari, <em>Exploring Causes of Depression and Anxiety Health Disparities (HD) by Examining Differences between 1:1 Matched Individuals.<\/em> Brain Sciences, 2018. <strong>8<\/strong>(12): p. 
<a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/30487396\/\">https:\/\/pubmed.ncbi.nlm.nih.gov\/30487396\/<\/a>.<\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>(3). Tools to evaluate causality from different scientific domains ***Some preliminary ground-setting first; causality rests on contrary-to-fact assumptions: this \u201cwhat if\u2026?\u201d permeates all statistical modeling: the hypothesis testing procedure starts with a long list of \u2018if\u2019s, e.g. \u2018if patients are independent cases, if the sample was selected randomly, if the distribution of the outcome resembles [&hellip;]<\/p>\n","protected":false},"author":2514,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"footnotes":""},"acf":[],"publishpress_future_action":{"enabled":false,"date":"2026-04-12 20:08:04","action":"change-status","newStatus":"draft","terms":[],"taxonomy":""},"_links":{"self":[{"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/pages\/69"}],"collection":[{"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/users\/2514"}],"replies":[{"embeddable":true,"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/comments?post=69"}],"version-history":[{"count":5,"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/pages\/69\/revisions"}],"predecessor-version":[{"id":114,"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/pages\/69\/revisions\/114"}],"wp:attachment":[{"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/media?parent=69"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}