Replication games: how to make reproducibility research more systematic
Abel Brodeur, Anna Dreber, Fernando Hoces de la Guardia & Edward Miguel
In some areas of social science, around half of studies can’t be replicated. A new test-fast, fail-fast initiative aims to show what research is hot — and what’s not.
In October last year, one of us (A.B.) decided to run an ad hoc workshop at a research centre in Oslo, to try to replicate papers from economics journals. Instead of the handful of locals who were expected to attend, 70 people from across Europe signed up. The message was clear: researchers want to replicate studies.
Replication is sorely needed. In areas of the social sciences, such as economics, philosophy and psychology, some studies suggest that between 35% and 70% of published results cannot be replicated when tested with new data (refs 1–4). Often, researchers cannot even reproduce results when using the same data and code as the original paper, because key information is missing.
Yet most journals will not publish a replication unless it refutes an impactful paper. In economics, less than 1% of papers published in the top 50 journals between 2010 and 2020 were some type of replication (ref. 5). That suggests that many studies with errors are going undetected.
After the Oslo workshop, we decided to try to make replication efforts in our fields of economics and political science more systematic. Our virtual, non-profit organization, the Institute for Replication, now holds one-day workshops — called replication games — to validate studies.
Since October 2022, we’ve hosted 12 workshops across Europe, North America and Australia, with 3 more scheduled this year. Each workshop has typically involved around 65 researchers in teams of 3–5 people, re-analysing about 15 papers. The teams either try to replicate papers, by generating new data and testing hypotheses afresh, or attempt to reproduce them, by testing whether the results hold when the published data are re-analysed. For many papers in our fields of study, in which reproducing results often involves re-running computer code, it’s possible to do much of this work in a single day (see ‘A typical replication games project’). Each team’s findings are released as a preprint report, and these reports will be collated and published each year as a meta-paper.