Full Title: Assessing Certainty without Certainty: On the Use of Technical Tools for Assessing the Methodological Quality of Biomedical Research Data and its Role in the Emergence of Scientific Dissent
Since the mid-twentieth century, the biomedical information explosion has generated an ever-growing demand for systematic reviews that synthesize the results of individual studies and thereby track the current state of knowledge. Yet the degree of certainty with which available data actually inform us about a treatment's true effect depends essentially on how a study is designed and executed. Accordingly, standardized tools such as checklists and scales emerged to allow reviewers to assess the quality of a given study objectively. Soon, however, the biomedical research community began to realize that these tools failed to measure what they were intended to measure: the same tool applied by different reviewers to the same data often yielded very different results. Although this fundamental problem remains unresolved, these tools are used today more than ever.
As part of the “Practices of Validation in the Biomedical Sciences” Research Group, this project aims to understand how exactly this particular practice of validation has been used, assessed, and revised over time, and how the lack of interrater reliability threatens the collective production of biomedical knowledge. The project combines the social and historical epistemology of science with a philosophy-in-medicine approach as its methods of inquiry. Special attention will be paid to the apparent involvement of subjectivity in the claims of certainty mediated through these tools, and to how this may account for the emergence of scientific dissent in biomedical research communities. In doing so, the investigation also intends to offer a new perspective on a central topic in the history and philosophy of science: the role that research data play in the evaluation of scientific hypotheses.