Compensate Professors with Replication Options
Financial markets for binary prediction options that pay out upon successful replication would provide insight into the chance of a finding being true. Professors could be paid with these options.
In academia, some fields, such as literature, allow the mind to wander largely unconstrained by reality, but many fields attempt to say something true about the world. Unfortunately, the replication crisis (the problem that the results of many scientific studies fail to replicate) has revealed that false findings are prevalent in many fields. Gwern Branwen has an excellent article on the replication crisis in which he explains the causes:
The crisis is caused by methods & publishing procedures which interpret random noise as important results, far too small datasets, selective analysis by an analyst trying to reach expected/desired results, publication bias, poor implementation of existing best-practices, nontrivial levels of research fraud, software errors, philosophical beliefs among researchers that false positives are acceptable, neglect of known confounding like genetics, and skewed incentives (financial & professional) to publish ‘hot’ results.
Researchers can increase the chance of their study replicating, but this requires more effort and might result in fewer and less impressive publications. A huge culprit of the replication crisis is a bad statistical practice known as p-hacking. When researchers want to evaluate a hypothesis, they formulate a null hypothesis and an alternative hypothesis. They then compute a test statistic from the available data, and the p-value gives the probability of obtaining a test statistic at least as extreme as the one observed, assuming the null hypothesis is true. The researcher then applies a threshold to decide whether the result is statistically significant; the usual threshold is p < 0.05.
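To make the mechanics concrete, here is a minimal sketch (hypothetical data, Python standard library only) of estimating a two-sided p-value with a permutation test: shuffle the group labels many times and count how often the shuffled difference in means is at least as extreme as the observed one.

```python
import random
import statistics

def permutation_p_value(a, b, n_perm=10_000, seed=0):
    """Two-sided p-value for a difference in group means,
    estimated by randomly reassigning the group labels."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical measurements: treatment vs. control
treatment = [5.1, 5.8, 6.0, 5.5, 6.2]
control = [4.9, 5.0, 5.3, 4.8, 5.2]
p = permutation_p_value(treatment, control)
print(p < 0.05)  # True: significant at the usual threshold for this made-up data
```

A permutation test is just one way to get a p-value; the same logic of comparing an observed statistic against its null distribution underlies the parametric tests researchers more commonly use.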
A researcher who wants a significant result and has a very large dataset could simply run tests on all sorts of different hypotheses until one crosses the threshold, even when the relationship is spurious. Researchers also engage in outright fraudulent practices like tampering with data. All of these practices undermine the validity of entire fields of science.
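The danger of testing many hypotheses can be simulated directly. The sketch below (assumed parameters, standard library only) runs 200 "studies" comparing two groups drawn from the same distribution, so the null hypothesis is true every time, and counts how many still cross p < 0.05 by chance alone.

```python
import math
import random
import statistics

def noise_study(rng, n=30):
    """Compare two groups drawn from the SAME distribution,
    so any 'significant' difference is a false positive."""
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    # Approximate two-sample z-test
    se = math.sqrt(statistics.pvariance(a) / n + statistics.pvariance(b) / n)
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    # Two-sided p-value from the normal tail
    return math.erfc(z / math.sqrt(2))

rng = random.Random(42)
results = [noise_study(rng) for _ in range(200)]
false_positives = sum(p < 0.05 for p in results)
print(false_positives)  # around 5% of the 200 tests, despite no real effect
```

A researcher who runs 200 such tests and reports only the handful of "significant" ones is p-hacking, even if no individual test was computed incorrectly.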
Some have recommended preregistering studies before they are even started to prevent bad statistical practices. Another approach is open research, in which data and methods are made available for replication attempts and error checking. I think this is a step in the right direction.
I have another proposal that would create more of a financial incentive. It’s not going to happen anytime soon, but it is worth considering. I will call it compensation in replication options. The idea is to pay professors at research universities in binary options which can be exchanged on a secondary market. The option will expire when the study is replicated. If the results reach the same conclusion, the option pays $1. Otherwise, the payout is $0.
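As a sketch, the payoff structure of such an option is simple (the function name and $1 payout are illustrative):

```python
def replication_option_payout(replicated: bool, payout: float = 1.0) -> float:
    """Binary option granted at publication: it expires when a
    replication attempt concludes, paying `payout` dollars on a
    successful replication and nothing otherwise."""
    return payout if replicated else 0.0

print(replication_option_payout(True))   # 1.0
print(replication_option_payout(False))  # 0.0
```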
Professors often search for novel results that are interesting, amusing, or revolutionary to their discipline. It is much easier to get such results by engaging in bad statistical practices. People who know about the replication crisis are skeptical when they hear seemingly implausible conclusions, such as the claim that adopting a powerful "power pose" raises your testosterone level. It may even be that laypeople can guess relatively well whether a study will replicate, as this study found.
If we had an open market for replication options in which everyone could participate, prices would approach the actual replication probabilities. Intelligent investors familiar with the research could trade against any systematic biases for a profit. My expectation is that professors who publish studies that sound implausible at first will see their options fall drastically in price. A professor who is confident in their methods and rigor could buy up the discounted options and profit from that confidence.
The price of the option would approximate the probability of the study replicating. Finer details would have to be worked out, such as who is allowed to perform a replication and what counts as a successful one. We don't want to incentivize flawed research or create perverse incentives.
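A rough sketch of why a confident author profits in expectation (the numbers are hypothetical):

```python
def expected_profit(price, believed_prob, n_options, payout=1.0):
    """Expected profit from buying `n_options` binary replication
    options at `price` each, given a believed probability that
    the study replicates."""
    return n_options * (believed_prob * payout - price)

# A confident author buys 1,000 options the market prices at $0.30,
# believing the true replication probability is 0.80.
print(expected_profit(0.30, 0.80, 1_000))  # 500.0
```

Whenever the market price sits below the author's honest probability estimate, buying has positive expected value, which is exactly the incentive the proposal relies on.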
You could build all sorts of financial derivatives on top of the options. You could create inverse options that pay $1 if the study fails to replicate. You could bundle many options for a specific hypothesis, or go preemptively long or short on studies with a certain coauthor, institution, or field of research. Those who believe the market is underpricing them could bet on the quality of their own work, institution, or field.
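A sketch of how such derivatives could be built from the basic payoff (names and outcomes are hypothetical):

```python
def option_payout(replicated: bool) -> float:
    """The basic binary option: pays $1 on successful replication."""
    return 1.0 if replicated else 0.0

def inverse_payout(replicated: bool) -> float:
    """Pays $1 if the study FAILS to replicate; its no-arbitrage
    price is $1 minus the price of the basic option."""
    return 1.0 - option_payout(replicated)

def bundle_payout(outcomes):
    """A basket of options on studies sharing, say, a coauthor,
    institution, or field."""
    return sum(option_payout(r) for r in outcomes)

# Hypothetical bundle: three studies by one coauthor, two replicate.
print(bundle_payout([True, True, False]))  # 2.0
```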
Nit: Philosophy is *supposed* to reflect the real world (but I agree it doesn't; one of my blogs: https://mugwumpery.com/?p=587).
I've worked in research biology for the last 15 years; this is my field nowadays.
The replication crisis is real; we see less than 20% of published papers replicate when we try (and since I'm pseudonymous - we're a really good lab).
Almost 10 years ago I proposed "Project Popper", a web-based mechanism for assigning scientists reputation scores based on replication of their work (modeled on eBay and Stack Exchange reputations; I ran it past Robin Hanson, who didn't seem impressed).
A prediction market is a very plausible solution, but there are difficulties.
Salaries based solely on replication would incentivize "trivially true" papers: work that doesn't present anything really new or useful but is pretty sure to replicate. Maybe journals would push back against publishing such work.
A larger problem is how to incentivize replication attempts. This seems to be the core of the problem - good journals want to publish work that's new and surprising and important. Replications don't get published in good journals, but papers in good journals are what makes a career.
Another issue is how to judge replication results: when a replication fails, the original authors are likely to claim the replicators screwed up.