Sheffield University held its first ReproducibiliTea meeting in October 2019, with about 15 people attending from medicine, biology, engineering, psychology, and other departments. Stavrina Dimosthenous, a PhD researcher in materials science, led the discussion and introduced us to a psychology paper by Pashler and Harris (2012): Is the Replicability Crisis Overblown? Three Arguments Examined. We discussed what p-values are, what p-hacking is, and how p-hacking can arise when researchers exclude data that doesn't support their hypothesis.
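To see why excluding inconvenient data is a problem, here is a minimal simulation sketch (not from the paper; the setup, sample sizes, and exclusion rule are all illustrative assumptions). Two groups are drawn from the same distribution, so there is no real effect. An "honest" analysis tests the data as collected; a "hacked" analysis repeatedly drops the data points that most oppose the hoped-for effect and retests until the result looks significant.

```python
import math
import random
import statistics

def welch_p(a, b):
    # Two-sided p-value from a Welch t statistic, using a normal
    # approximation to the t distribution (rough, but fine for illustration).
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return math.erfc(abs(t) / math.sqrt(2))  # = 2 * (1 - Phi(|t|))

def hacked_p(a, b, max_drops=3):
    # A caricature of p-hacking: keep dropping the observation that most
    # opposes a "group a > group b" effect until p dips below 0.05.
    a, b = sorted(a), sorted(b)
    p = welch_p(a, b)
    for _ in range(max_drops):
        if p < 0.05:
            break
        a = a[1:]   # discard a's smallest value
        b = b[:-1]  # discard b's largest value
        p = welch_p(a, b)
    return p

random.seed(1)
trials = 2000
honest = hacked = 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(20)]
    b = [random.gauss(0, 1) for _ in range(20)]  # same distribution: no true effect
    honest += welch_p(a, b) < 0.05
    hacked += hacked_p(a, b) < 0.05

print(f"false positives without exclusion: {honest / trials:.1%}")
print(f"false positives with exclusion:    {hacked / trials:.1%}")
```

The honest analysis stays near the nominal 5% false-positive rate, while the exclusion strategy produces "significant" findings far more often, even though nothing real is there.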
Our discussion of hypothesis testing led us to think about what happens when a researcher finds a negative result, i.e. when the data does not support the hypothesis. Some attendees had seen negative clinical trial results published in "high impact" journals, because those results revealed a risk of harm to patients. Inconclusive results, which show no clear outcome either way, often don't get published at all. Despite these gaps in the publication record, some journal club attendees with clinical research experience felt that, in general, experiments were well designed and publication quality was high. In areas where research can be less systematic and more exploratory, such as computational science and engineering, some attendees thought publication quality was more variable, and questioned whether journals and peer reviewers had sufficient incentive (or time) to ensure high quality.
The next Sheffield ReproducibiliTea is at 1pm on 14th November in Pam Liversidge E06, where we'll be covering a recent opinion piece by Daniele Fanelli (2018): Is science really facing a reproducibility crisis, and do we need it to? Log into your Sheffield University account, and you can join our Google group and subscribe to our Google calendar. See you there!