Explore open access research and scholarly works from NERC Open Research Archive


Towards a unified approach to formal “risk of bias” assessments for causal and descriptive inference

Pescott, O.L. (ORCID: https://orcid.org/0000-0002-0685-8046); Boyd, R.J. (ORCID: https://orcid.org/0000-0002-7973-9865); Powney, G.D. (ORCID: https://orcid.org/0000-0003-3313-7786); Stewart, G.B. 2026. Towards a unified approach to formal “risk of bias” assessments for causal and descriptive inference. Quality & Quantity, 16. https://doi.org/10.1007/s11135-026-02687-0

Abstract

Statistics is sometimes described as the science of reasoning under uncertainty. Statistical models provide one view of this uncertainty, but what is frequently neglected is the “invisible” portion of uncertainty: that assumed not to exist once a model has been fitted to some data. Systematic errors, i.e. bias, in data relative to some model and inferential goal can seriously undermine research conclusions, and qualitative and quantitative techniques have been created across several disciplines to quantify and generally appraise such potential biases. Perhaps best known are so-called “risk of bias” assessment instruments used to investigate the likely quality of randomised controlled trials in medical research. However, the logic of assessing the risks caused by various types of systematic error to statistical arguments applies far more widely. This logic applies even when statistical adjustment strategies for potential biases are used, as these frequently make assumptions (e.g. data “missing at random”) that can rarely be empirically guaranteed. Mounting concern about such situations can be seen in the increasing calls for greater consideration of biases caused by nonprobability sampling in descriptive inference (e.g. in survey sampling), and the statistical generalisability of in-sample causal effect estimates in causal inference. Both of these relate to the consideration of model-based and wider uncertainty when presenting research conclusions from models. Given that model-based adjustments are never perfect, we argue that qualitative risk of bias reporting frameworks for both descriptive and causal inferential arguments should be further developed and made mandatory by journals and funders. It is only through clear statements of the limits to statistical arguments that consumers of research can fully judge their value for any given application.

Documents
s11135-026-02687-0.pdf - Published Version
Available under License Creative Commons Attribution 4.0.

