{"id":104,"date":"2025-05-12T09:51:54","date_gmt":"2025-05-12T13:51:54","guid":{"rendered":"https:\/\/health.uconn.edu\/causality\/?page_id=104"},"modified":"2025-07-28T10:59:16","modified_gmt":"2025-07-28T14:59:16","slug":"errors","status":"publish","type":"page","link":"https:\/\/health.uconn.edu\/causality\/errors\/","title":{"rendered":"Types of errors and their effects on causality conclusions"},"content":{"rendered":"<p>*** There is fluid language and understanding of errors, and how they are handled, in medicine, social sciences, and more precise sciences like physics and engineering<a href=\"#_edn1\" name=\"_ednref1\"><span>[i]<\/span><\/a>. In its broadest sense, they represent imprecision, uncertainty, ambiguity of knowledge, of how the world works<a href=\"#_edn2\" name=\"_ednref2\"><span>[ii]<\/span><\/a>.<\/p>\n<p><strong>1.<\/strong> Types of errors and their explications<\/p>\n<p>*** Perhaps the first needed logically explanation is of \u2018standard error\u2019 (SE): this is one of the most confusing terms in statistics, being a \u2018misnomer\u2019 for several reasons: <strong>(1).<\/strong> It\u2019s not at all \u2018standard\u2019, much like the \u2018standard deviation\u2019 is not either, they are used to standardize the values of the variable or parameter they refer to: e.g. the z-score is a variable scaled such that it represents how many standard deviations above\/below a mean each value sits at; SEs are \u2018measured\u2019 in the units of the variable\/parameter they apply to; <strong>(2). <\/strong>It does not have a meaning in itself, until we add the \u2018target\u2019 quantity: SE of what? 
It can be \u2018SE of the mean\u2019, \u2018SE of a regression parameter\u2019, etc.; <strong>(3).<\/strong> Its \u2018error\u2019 part does carry the meaning of uncertainty of a quantity, but it only carries sense alongside its original value, so commonly this is written as some X (+\/-x); <strong>(4).<\/strong> The \u2018standardizing\u2019 involved in creating z-scores is the same one involved in applying a z-test to a statistical parameter: dividing its value by its SE, and deciding this way how many SE\u2019s away from 0 (the common null hypothesis spot: this can change though) the parameter sits; e.g. see the \u2018SE of the mean\u2019 (note that for small N\u2019s, the t-test is instead used, but z and t(df) tests are the same for large samples).<\/p>\n<p>*** First, setting the stage: statistics differs from \u2018pure\u2019 math\/arithmetic\/calculus in how strict\/sharp vs. \u2018loose\u2019 the equality\/equivalence relation is: we accept that 1 \u2260 2; but statistics on the other hand uses modified relational rules, which allow one to sometimes declare<\/p>\n<p>1 \u2248 2 (when 2 is an average<a href=\"#_edn3\" name=\"_ednref3\"><span>[iii]<\/span><\/a>\u00a0 falling less than two standard errors away from (the average) 1). This adds a new source of error\/noise\/uncertainty, which makes statistics more challenging<a href=\"#_edn4\" name=\"_ednref4\"><span>[iv]<\/span><\/a> (i.e. \u2018at what juncture can we say 1 \u2248 2?&#8217;)<a href=\"#_edn5\" name=\"_ednref5\"><span>[v]<\/span><\/a>.<\/p>\n<p>*** The labeling entices some strong ideological debates, e.g. reactions to calling the unexplained variability in an outcome \u2018residual error\u2019 instead of the \u2018proper\u2019 disturbance<a href=\"#_edn6\" name=\"_ednref6\"><span>[vi]<\/span><\/a>. As with other statistical terminology, e.g. 
direct and indirect effects, scientific domain loyalties engender strong disagreements (the \u2018direct effect\u2019 is in reality itself a \u2018residual effect\u2019; it does not have ontological standing by itself, because every time a new mediator is \u2018added\u2019, this \u2018direct\u2019 effect changes: it\u2019s what\u2019s left as direct effect, \u2018as of now\u2019).<\/p>\n<p>*** From a longstanding tradition, errors are \u2018latent variables\u2019 or \u2018unobserved quantities\u2019, unobservable<a href=\"#_edn7\" name=\"_ednref7\"><span>[vii]<\/span><\/a> really (if you sense a tinge of \u2018potential outcomes\u2019 flavor here, yes, POs are handled as a form of latent\/missing \u2018thing\u2019; in SWIGs, e.g. (<a href=\"https:\/\/csss.uw.edu\/research\/working-papers\/single-world-intervention-graphs-swigs-unification-counterfactual-and\">Single World Intervention Graphs<\/a>), they really are modeled like folks model the observed ones, with directional arrows between them and such).<\/p>\n<p>*** Economists call them \u2018error-contaminated\u2019 or mismeasured data [5], and rely on \u2018observable imperfect proxies\u2019 to get at the \u2018quantities that are unobservable\u2019 [6].<\/p>\n<p>In computing work, errors are \u2018noise\u2019 that needs to be reduced to extract the \u2018signal\u2019<a href=\"#_edn8\" name=\"_ednref8\"><span>[viii]<\/span><\/a>; in cipher &amp; encryption worlds, the approach is from the opposite direction: how to scramble the signal so much that no algorithm can recover it from the \u2018noise\u2019.<\/p>\n<p><strong>2.<\/strong> Measurement imprecision and medical decisions<\/p>\n<p>*** \u2018Measurement\u2019 in medicine appears to have distinct challenges, and preferred language and tools. 
In contrast to measurement of psychological constructs (the domain of \u2018psychometrics\u2019<a href=\"#_edn9\" name=\"_ednref9\"><span>[ix]<\/span><\/a>), we can talk about \u2018biometrics\u2019.<\/p>\n<p>What we try to do is gauge (and wipe away) the \u2018measurement error\u2019, or, as it is sometimes called, \u2018experimental error\u2019 [7] &#8211; i.e. \u2018separate out\u2019 these 2 components: A1c<strong><em><sub>i<\/sub><\/em><sup> <\/sup><sub>Measured<\/sub><\/strong><sub> <\/sub>= A1c<strong><em><sub>i<\/sub><\/em><sup> True<\/sup><\/strong> + Meas.Error<strong><sub>A1c<em>.i<\/em><\/sub><sup> Unobservable <\/sup><\/strong>where we mark the \u2018noise\u2019 as not measurable; it is a \u2018left over\u2019 part after we \u2018take out\u2019 the true value: not measurable directly<a href=\"#_edn10\" name=\"_ednref10\"><span>[x]<\/span><\/a>!<\/p>\n<p><strong>3.<\/strong> Modeling errors; how do errors trickle down\/not<\/p>\n<p>*** The basic setup in statistics invites explaining variability among persons in some medical outcomes, like A1c<a href=\"#_edn11\" name=\"_ednref11\"><span>[xi]<\/span><\/a>, by a \u2018predictor\u2019, say BMI, or modeling A1c = f(BMI). The functional form f is a fleeting quantity, and trying to find a value for it (as one can for the g value in the gravitational attraction formula) is not going to bear fruit, mostly because there are many more causes of one\u2019s A1c value than BMI: these \u2018many others\u2019 are grouped into the \u2018error term\u2019 in a linear regression model.<\/p>\n<p>&#8211; In physics e.g., there are formal ways in which uncertainties \u2018propagate\u2019 (see [9]: \u201cwhen two numbers x and y are measured and the results are used to calculate the difference q = x &#8211; y. We found that the uncertainty in q is just the sum \u03b4q = \u03b4x + \u03b4y of the uncertainties in x and y.\u201d p. 45)<a href=\"#_edn12\" name=\"_ednref12\"><span>[xii]<\/span><\/a>.<\/p>\n<p>Topics for later expansions:<\/p>\n<p><strong>+1. 
<\/strong>\u2018Latent\u2019\/true measures<a href=\"#_edn13\" name=\"_ednref13\"><span>[xiii]<\/span><\/a><\/p>\n<p><strong>+2. <\/strong>Imprecision and health disparities conclusions<\/p>\n<p><strong>+3. <\/strong>Tracing rule for following implications<\/p>\n<p>*** Bollen\u2019s MIIVsem procedure: why it illuminates<\/p>\n<p>*** What correlated errors imply<\/p>\n<p><span style=\"text-decoration: underline\"><strong>FOOTnotes<\/strong><\/span><\/p>\n<p><a href=\"#_ednref1\" name=\"_edn1\"><span>[i]<\/span><\/a> One can rank-order sciences by the strength of the causal statements they can make about the studied phenomena, which can alternatively be formulated as the \u2018percent error\u2019 left in their explicating theories; compare e.g. the A1c(BMI) functional relation in medicine, at one endpoint, to the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Cobb%E2%80%93Douglas_production_function\">Cobb-Douglas production function <\/a>in economics (Y = A * K^\u03b1 * L^\u03b2, where Y is output, K is capital, L is labor, A is a constant representing total factor productivity, and \u03b1 and \u03b2 are parameters representing the share of output attributable to each input), and, at the other endpoint, to the \u2018pressure*volume ~= temperature&#8217; law of the \u2018ideal gas\u2019 in physics (or <a href=\"https:\/\/chem.libretexts.org\/Courses\/Widener_University\/Widener_University%3A_Chem_135\/05%3A_Gases\/5.02%3A_Relating_Pressure_Volume_Amount_and_Temperature_-_The_Ideal_Gas_Law\">chemistry<\/a>).<\/p>\n<p><a href=\"#_ednref2\" name=\"_edn2\"><span>[ii]<\/span><\/a> Uncertainty measures, from \u00a0<a href=\"https:\/\/bit.ly\/measure_hd\" class=\"broken_link\">https:\/\/bit.ly\/measure_hd<\/a><\/p>\n<table width=\"612\">\n<tbody>\n<tr>\n<td width=\"612\">1. Tolerance of Ambiguity scale (Budner 1962)\u00a0 &#8211; 16 items, Psychology (personality)<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">2. 
AT-20 (MacDonald 1970) &#8211; 20 items, Psychology (decision making)<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">3. MAT50 (Norton 1975)\u00a0 &#8211; 61 items, Psychology (personality)<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">4. Physicians\u2019\u00a0 Reactions to Uncertainty scale (PRU) (Gerrity, DeVellis et al. 1990) &#8211; 22 items, Health care<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">5. Multiple Stimulus Types Ambiguity Tolerance Scale-I\u00a0 and \u2013II (MSTAT-I &amp; MSTAT-II) (McLain 1993, McLain 2009) &#8211; 22 items, Psychology (decision making)<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">6. Tolerance For Ambiguity (TFA) scale (Geller, Tambor et al. 1993) &#8211; 7 items, Health care<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">7. Intolerance for Uncertainty Scale (IUS) French (Freeston, Rh\u00e9aume et al. 1994) English (Buhr and Dugas 2002) &#8211; 27 items, Psychology (clinical)<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">8. Need For Closure Scale (NFCS) (Webster and Kruglanski 1994) &#8211; 47 items, Psychology (social)<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">9. Attitudinal Ambiguity Tolerance scale (AAT) (Durrheim and Foster 1997) &#8211; 45 items, Psychology (personality)<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">10. Uncertainty Response Scale (Greco and Roger 2001) &#8211; 48 items, Psychology (personality)<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">11. Intolerance of Uncertainty Scale short form (IUS-12) (Carleton, Norton et al. 2007) &#8211; 12 items, Psychology (clinical)<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">12. Intolerance of Uncertainty Index (Carleton, Gosselin et al. 2010) &#8211; 30 items, Psychology (clinical)<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">13. Intolerance of Uncertainty Scale for Children (IUSC) (Comer, Roy et al. 2009) &#8211; 27 items, Psychology (clinical)<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">14. Ambiguity Aversion in Medicine (AA-Med) scale (Han, Reeve et al. 2009) &#8211; 6 items, Health care<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">15. 
Tolerance of Ambiguity Scale (Herman, Stevens et al. 2010) &#8211; 12 items, Psychology (organizational)<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">16. Dealing with Uncertainty Questionnaire (DUQ) (Schneider, Lowe et al. 2010) &#8211; 10 items, Health care<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">17. Tolerance of Ambiguity in Medical Students and Doctors (TAMSAD) (Hancock, Roberts et al. 2015) &#8211; 29 items, Health care<\/td>\n<\/tr>\n<tr>\n<td width=\"612\">18. Multidimensional Attitude toward Ambiguity Scale (MAAS) (Lauriola, Foschi et al. 2015) &#8211; 30 items, Psychology (decision making)<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><a href=\"#_ednref3\" name=\"_edn3\"><span>[iii]<\/span><\/a> The \u2018typical\u2019 or \u2018average American\u2019 appears to be a fictional entity, statistically generated (the statement &#8220;The average American owns 1.02 cars&#8221; is &#8220;about the average American&#8221;, [1] p. 301), of which there are plenty, some of which exist, others of which don\u2019t: \u201cI intend to use the word exists so that it encompasses exactly those objects that orthodox philosophers hold to exist. In particular, it includes all the ordinary physical objects that we normally take to exist, and it does not include unicorns, gold mountains, winged horses, round squares (round square things), Pegasus, or Sherlock Holmes. The theory given below will say that there are unicorns, there is such a thing as Pegasus, etc., but that none of these exist.\u201d [2] p. 11 &amp; &#8220;If we forget or inhibit our philosophical training for the moment, we are all prepared to cite examples of nonexistent objects: Pegasus, Sherlock Holmes, unicorns, centaurs, . . . . Those are all possible objects, but we can find examples of impossible ones, too; Quine&#8217;s example of the round square cupola on Berkeley College will do. It is an impossible object, and it certainly doesn&#8217;t exist, so it seems to be an example of an impossible nonexistent object. 
With so many examples at hand, what is more natural than to conclude that there are nonexistent objects &#8211; lots of them?&#8221; [2], p. 2<\/p>\n<p><a href=\"#_ednref4\" name=\"_edn4\"><span>[iv]<\/span><\/a> The logic of hypothetical reasoning is an old topic in logic [3], and is involved in both the classical \u2018hypothesis testing\u2019 scientific procedure, and the more modern causal inference advances, based on \u2018what-if\u2019 contrary-to-fact \u2018potential outcomes\u2019 reasoning.<\/p>\n<p><a href=\"#_ednref5\" name=\"_edn5\"><span>[v]<\/span><\/a> The very concept of \u2018=\u2019 is itself a debate topic, among those with philosophical inclinations.<\/p>\n<p><a href=\"#_ednref6\" name=\"_edn6\"><span>[vi]<\/span><\/a> Judea Pearl says \u201cu&#8217;s stand for omitted factors\u201d in [4].<\/p>\n<p><a href=\"#_ednref7\" name=\"_edn7\"><span>[vii]<\/span><\/a> \u201cHigher-dimensional sphere packings are hard to visualize, but they are eminently practical objects\u201d <a href=\"https:\/\/www.quantamagazine.org\/sphere-packing-solved-in-higher-dimensions-20160330\/\">https:\/\/www.quantamagazine.org\/sphere-packing-solved-in-higher-dimensions-20160330\/<\/a> Some things can be \u2018practical\u2019, even if not accessible to us (the field of spirituality is filled with such entities!).<\/p>\n<p><a href=\"#_ednref8\" name=\"_edn8\"><span>[viii]<\/span><\/a> Unexpected insights into this come from niche mathematical problems, e.g. 
\u201cDense sphere packings are intimately related to the error-correcting codes used by cell phones, space probes and the Internet to send signals through noisy channels.\u201d <a href=\"https:\/\/www.quantamagazine.org\/sphere-packing-solved-in-higher-dimensions-20160330\/\">https:\/\/www.quantamagazine.org\/sphere-packing-solved-in-higher-dimensions-20160330\/<\/a> (seen from <a href=\"https:\/\/www.youtube.com\/watch?v=dr2sIoD7eeU\">https:\/\/www.youtube.com\/watch?v=dr2sIoD7eeU<\/a> \u00a0\u2018The things you&#8217;ll find in higher dimensions\u2019)<\/p>\n<p><a href=\"#_ednref9\" name=\"_edn9\"><span>[ix]<\/span><\/a> Note that some \u2018yardsticks\u2019 we devise are both bio~ and psych~metrics, like \u2018self rated health\u2019, which is a common single-\u2018item\u2019\/question\/method health \u2018outcome\u2019 (per googleAI: \u201cIn general, would you say your health is excellent, very good, good, fair, or poor?&#8221;): it usually has 5 response options, but to appreciate the thickness of the brush used here, look at the 40 nuances of \u2018bad\u2019\/\u2019good\u2019 descriptors here: <a href=\"https:\/\/today.yougov.com\/society\/articles\/21717-how-good-good-1\">How good is \u201cgood\u201d?<\/a><\/p>\n<p><a href=\"#_ednref10\" name=\"_edn10\"><span>[x]<\/span><\/a> If you sense here a similarity to the simple linear prediction model commonly used, the linear regression, you are not the first; here, however, we have a \u2018flipped\u2019 direction, and it becomes evident why this model is \u2018not identified\u2019 if we count the \u2018superscripts\u2019. 
In a regression, we would predict A1c from BMI, say, so we have<\/p>\n<p>A1c<strong><em><sub>i<\/sub><\/em><sup> <\/sup><sub>Measured<\/sub><\/strong><sub> <\/sub>= \u03b2<strong><sub>0<\/sub><\/strong> + \u03b2<strong><sub> BMI-&gt;A1c<\/sub><\/strong>\u00b7 BMI<strong><em><sub>i<\/sub><\/em><sup> <\/sup><sub>Measured<\/sub><\/strong> + Residual.Error<strong><sub>A1c<em>.i<\/em><\/sub><sup> Unobservable<\/sup><\/strong> which allows us to estimate \u03b2, and then even \u2018generate\u2019 the residual error as a derivative of the analysis: simply the difference between the predicted value and the observed one! In this \u2018<a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC4096146\/\"><em>classical test theory<\/em><\/a>\u2019 <strong><sub>\u00a0<\/sub><\/strong>view, A1c<strong><em><sub>i<\/sub><\/em><sup> <\/sup><sub>Measured<\/sub><\/strong><sub> <\/sub>= A1c<strong><em><sub>i<\/sub><\/em><sup> True<\/sup><\/strong> + Meas.Error<strong><sub>A1c<em>.i<\/em><\/sub><sup> Unobservable<\/sup><\/strong>, we don\u2019t know where to \u2018split\u2019 the observed quantity, i.e. how much \u2018noise\u2019 to carve out so we have its true value left: what we need to estimate is this very \u2018strength of relation\u2019 coefficient, which in measurement (psychometric) parlance is the \u2018loading\u2019 \u03bb: A1c<strong><em><sub>i<\/sub><\/em><sup> <\/sup><sub>Measured<\/sub><\/strong><sub> <\/sub>= \u03bb<strong><sub> True.A1c-&gt;A1c<\/sub><\/strong>\u00b7 A1c<strong><em><sub>i<\/sub><\/em><sup> True<\/sup><\/strong> + Meas.Error<strong><sub>A1c<em>.i<\/em><\/sub><sup> Unobservable<\/sup><\/strong><\/p>\n<p>*** One could \u2018run a model\u2019 with only 2 indicators, but in that case one needs some \u2018identifying\u2019 assumptions, i.e. 
to reduce the number of estimates, or increase the df from -1 to 0: we can do that by forcing the 2 loadings to be equal: \u201cadditional restrictions are needed to accomplish model identification, such as indicator loading equality (true score\u2013equivalent measures) and\/or error variance equality (e.g., parallel measures; Lord &amp; Novick, 1968).\u201d [8] p. 231 (Bollen says, in the context of one measure, repeated: \u201cX<strong><sub>t<\/sub><\/strong> and X<strong><sub>t+1<\/sub><\/strong> are parallel measures [\u03bb<strong><sub>t<\/sub><\/strong> = \u03bb<strong><sub>t + 1<\/sub><\/strong> = 1 and VAR(e<strong><sub>t<\/sub><\/strong> ) = VAR(e<strong><sub>t + 1<\/sub><\/strong>)]\u201d p. 201)<\/p>\n<p>*** A simple \u2018tracing\u2019 exercise shows how and why in this case we need 3 such \u2018measured instances\u2019 of A1c per patient, to be able to \u2018get at\u2019 the true value, say A1c measured with 3 types of devices (hence we have 3 such equations, indexed by j: A1c<strong><em><sub>i<\/sub><\/em><sup> <\/sup><sub>Measured<em>.j<\/em><\/sub><\/strong><sub> <\/sub>= \u03bb<strong><em><sub>j<\/sub><\/em><\/strong><strong><sub> True.A1c-&gt;A1c<em>.j<\/em><\/sub><\/strong> \u00b7 A1c<strong><em><sub>i<\/sub><\/em><sup> True<\/sup><\/strong> + Meas.Error<strong><sub>A1c<em>.i<\/em><\/sub><sup> Unobservable<\/sup><em><sub>.j<\/sub><\/em><\/strong>, each A1c <strong><em><sub>j<\/sub><\/em><\/strong><sub> <\/sub><strong><em>\u00a0<\/em><\/strong>with its own \u2018imperfection\u2019). I show in the Excel online how to calculate these loadings by hand, which can then be used to calculate the \u2018composite reliability\u2019 (per Raykov\u2019s method: simply the percent of the total variability that is not \u2018noise\u2019 <a href=\"https:\/\/statisticseasily.com\/glossario\/what-is-composite-reliability-understanding-its-importance\/\">WWW e.g.<\/a>). 
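This hand calculation can be sketched in a few lines of Python (a minimal simulation of my own, not the Excel file; the device loadings, noise levels, and variable names are made up for illustration, and the true score is assumed standardized): the three loadings are recovered from the three pairwise covariances, and a Raykov-style composite reliability is formed from them.

```python
import random
from statistics import fmean

# Hypothetical setup (not from the page): one standardized 'true' A1c
# score per patient, measured by 3 devices j, each giving
# x_j = lambda_j * T + e_j with independent device noise e_j.
random.seed(42)
n = 20000
true_loadings = [1.0, 0.8, 0.6]   # assumed device loadings
noise_sd = [0.5, 0.4, 0.7]        # assumed device noise SDs

T = [random.gauss(0, 1) for _ in range(n)]           # unobservable true scores
X = [[lam * t + random.gauss(0, sd) for t in T]      # 3 observed measurement series
     for lam, sd in zip(true_loadings, noise_sd)]

def cov(a, b):
    """Sample covariance of two equal-length lists."""
    ma, mb = fmean(a), fmean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

# 'By hand' recovery: with Var(T) = 1, cov(x1,x2) = l1*l2,
# cov(x1,x3) = l1*l3, cov(x2,x3) = l2*l3, so:
c12, c13, c23 = cov(X[0], X[1]), cov(X[0], X[2]), cov(X[1], X[2])
l1 = (c12 * c13 / c23) ** 0.5
l2, l3 = c12 / l1, c13 / l1
loadings = [l1, l2, l3]

# Raykov-style composite reliability: the true-score share of the
# variance of the sum score x1 + x2 + x3.
error_var = [cov(x, x) - lam ** 2 for x, lam in zip(X, loadings)]
rho = sum(loadings) ** 2 / (sum(loadings) ** 2 + sum(error_var))
print(loadings, rho)   # loadings land near the assumed [1.0, 0.8, 0.6]
```

With one common factor and three indicators the model is just-identified (three covariances, three loadings), which is the tracing-rule reason exactly three measured instances suffice.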
Note that we can then talk about the reliability of each measurement instrument\/measurement method \u03c1<strong><em><sub>A1cj<\/sub><\/em><\/strong>, and of the \u2018true\u2019 measure, the \u2018scale reliability\u2019 \u03c1<strong><em><sub>TrueA1c<\/sub><\/em><\/strong>.<\/p>\n<p><a href=\"#_ednref11\" name=\"_edn11\"><span>[xi]<\/span><\/a> Note that a model with only A1c in it, no predictors, is better represented as A1c<strong><em><sub>i<\/sub><\/em><sup> <\/sup><\/strong>= Average(A1c<strong><em><sub> i<\/sub><\/em><\/strong>)\u00a0 + 1 \u00b7 u<strong><em><sub> i<\/sub><\/em><\/strong> (or simply u -&gt; A1c, which makes clearer that the entire variable is a \u2018big fat error\u2019): this is an unappreciated option available in several statistical packages: Stata e.g. can \u2018run\u2019 such a regression, without a predictor (the code is simply <em>reg A1c<\/em> !!!), which will display no regression coefficient, of course, but only the \u2018intercept\u2019, i.e. the conditional mean; having nothing to condition on, this is the sample mean estimate, which will be accompanied by its \u2018standard error\u2019.<\/p>\n<p><a href=\"#_ednref12\" name=\"_edn12\"><span>[xii]<\/span><\/a> Note that, as with note <strong><em><sup>i<\/sup><\/em><\/strong> above, the way physicists \u2018handle\u2019 error propagation is formally based on differentiating an equation of the outcome as a function of its \u2018predictors\u2019: one needs to know this \u2018law\u2019 to be able to take the partial derivatives with respect to each predictor one at a time. 
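This derivative-based rule can be mimicked numerically (a sketch of my own, not from [9]; the function name <em>propagate</em> and the inputs are hypothetical): approximate each partial derivative by a finite difference and sum the absolute contributions, which reproduces Taylor's simple-sum rule \u03b4q = \u03b4x + \u03b4y for q = x &#8211; y.

```python
# A numerical sketch of the physicists' rule dq = sum_i |df/dx_i| * dx_i,
# with the partial derivatives approximated by finite differences.
# 'propagate' and the example inputs are made up for illustration.

def propagate(f, values, uncertainties, h=1e-6):
    """Worst-case (simple-sum) uncertainty of f(*values)."""
    base = f(*values)
    dq = 0.0
    for i, (v, dv) in enumerate(zip(values, uncertainties)):
        bumped = list(values)
        bumped[i] = v + h
        partial = (f(*bumped) - base) / h  # ~ df/dx_i
        dq += abs(partial) * dv
    return dq

# Taylor's difference example q = x - y: dq = dx + dy
dq_diff = propagate(lambda x, y: x - y, [10.0, 4.0], [0.2, 0.3])   # ~0.5
# The same rule applied to a product q = x * y gives |y|*dx + |x|*dy
dq_prod = propagate(lambda x, y: x * y, [10.0, 4.0], [0.2, 0.3])   # ~3.8
```

The point of the numerical version is that one only needs the functional \u2018law\u2019 f itself, not its hand-derived partials.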
In most applications in medicine and social sciences, we simply assume additive linear relations.<\/p>\n<p><a href=\"#_ednref13\" name=\"_edn13\"><span>[xiii]<\/span><\/a> Econometricians use alternate language, not \u2018latent variables\u2019, but plain \u2018unobservables\u2019, and rarely make use of the graphical depiction, instead using equation-based functional relations like: \u201ctwo measurements, X and Y, are produced by mutually independent unobservables, U, V, and W, through the system, X = g(U,V) and Y = h(U,W)\u201d [10] p. 1, for the model U + V -&gt; X &amp; U + W -&gt; Y; on this Cam McIntosh keeps us all on our tiptoes <a href=\"https:\/\/listserv.ua.edu\/cgi-bin\/wa?A2=SEMNET;78022a2a.2505\">on SEMNET<\/a> (some references he suggested, e.g. [5, 10-12])<\/p>\n<p><em>References<\/em><\/p>\n<ol>\n<li>Van Inwagen, P., <em>Creatures of fiction.<\/em> American Philosophical Quarterly, 1977. <strong>14<\/strong>(4): p. 299-308.<\/li>\n<li>McCann, H.J., <em>Creation and the Sovereignty of God<\/em> <a href=\"https:\/\/drive.google.com\/file\/d\/1809fTVLp3sGooYLS7LfU-O2mWbimwXHP\/view?usp=sharing\"><em>https:\/\/drive.google.com\/file\/d\/1809fTVLp3sGooYLS7LfU-O2mWbimwXHP\/view?usp=sharing<\/em><\/a>. 2012: Indiana University Press.<\/li>\n<li>Rescher, N., <em>Hypothetical Reasoning.<\/em> Studies in Logic and the Foundations of Mathematics. 1964, Amsterdam: North-Holland Pub. Co.<\/li>\n<li>Pearl, J., <em>Causal Diagrams &#8211; a threat to correctness<\/em>. 01\/12\/2012.<\/li>\n<li>Schennach, S.M., <em>Recent Advances in the Measurement Error Literature.<\/em> Annual Review of Economics, 2016. <strong>8<\/strong>: p. 341-377.<\/li>\n<li>Schennach, S., <em>Measurement systems.<\/em> Journal of Economic Literature, 2022. <strong>60<\/strong>(4): p. 1223-1263.<\/li>\n<li>Altman, D.G. and J.M. 
Bland, <em>Measurement in medicine: the analysis of method comparison studies.<\/em> Journal of the Royal Statistical Society Series D: The Statistician, 1983. <strong>32<\/strong>(3): p. 307-317.<\/li>\n<li>Raykov, T., <em>Evaluation of Scale Reliability for Unidimensional Measures Using Latent Variable Modeling.<\/em> Measurement and Evaluation in Counseling and Development, 2009. <strong>42<\/strong>(3): p. 223.<\/li>\n<li>Taylor, J., <em>Introduction to error analysis, the study of uncertainties in physical measurements<\/em>. 1997.<\/li>\n<li>Hu, Y. and Y. Sasaki, <em>Identification of paired nonseparable measurement error models.<\/em> Econometric Theory, 2016. <strong>33<\/strong>(4): p. 955-979.<\/li>\n<li>Cunha, F., J.J. Heckman, and S.M. Schennach, <em>Estimating the Technology of Cognitive and Noncognitive Skill Formation.<\/em> Econometrica, 2010. <strong>78<\/strong>(3): p. 883-931.<\/li>\n<li>Zheng, Y., et al., <em>Nonparametric Factor Analysis and Beyond.<\/em> arXiv preprint arXiv:2503.16865, 2025.<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>*** There is fluid language and understanding of errors, and how they are handled, in medicine, social sciences, and more precise sciences like physics and engineering[i]. In its broadest sense, they represent imprecision, uncertainty, ambiguity of knowledge, of how the world works[ii]. 1. 
Types of errors and their explications *** Perhaps the first needed logically [&hellip;]<\/p>\n","protected":false},"author":2514,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"footnotes":""},"acf":[],"publishpress_future_action":{"enabled":false,"date":"2026-04-11 12:59:26","action":"change-status","newStatus":"draft","terms":[],"taxonomy":""},"_links":{"self":[{"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/pages\/104"}],"collection":[{"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/users\/2514"}],"replies":[{"embeddable":true,"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/comments?post=104"}],"version-history":[{"count":3,"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/pages\/104\/revisions"}],"predecessor-version":[{"id":113,"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/pages\/104\/revisions\/113"}],"wp:attachment":[{"href":"https:\/\/health.uconn.edu\/causality\/wp-json\/wp\/v2\/media?parent=104"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}