029. Replicability of medical studies

A discussion on the significance of our results


Related recent events

Harvard and the Brigham call for more than 30 retractions of cardiac stem cell research

He Promised to Restore Damaged Hearts. Harvard Says His Lab Fabricated Research.


Questions to ponder

  1. Must (medical) science be replicable?
  2. Pashler and Harris address three (3) general arguments made against the existence of a replicability crisis in science:
       a. The adoption of a low alpha level (e.g., 5%) puts reasonable bounds on the rate at which errors can enter the published literature, making false-positive effects rare enough to be considered a minor issue;
       b. Though direct replication attempts are uncommon, conceptual replication attempts are common—providing an even better test of the validity of a phenomenon; and
       c. Errors will eventually be pruned out of the literature if the field would just show a bit of patience.
  3. Do you believe the mechanisms currently in place are sufficiently self-correcting, or should something be done to compensate for possible inadequacies?
  4. As Begley and Ioannidis point out, “The estimates for [scientific] irreproducibility based on […] empirical observations range from 75% to 90%. These estimates fit remarkably well with estimates of 85% for the proportion of biomedical research that is wasted at-large.” If so much of our time and effort is wasted, why put any (or much) of our time and effort into these endeavors?
  5. The cost of medical care has ballooned to over $10,000 per person (~$3.2 trillion, 16.9% of U.S. GDP), the average life expectancy in the United States has declined year-over-year, and medical technologies – rather than decreasing in cost with scale and time – seem to get more expensive by the day (note the 700% increase in the price of an EpiPen over the past decade). All this to ask: is it at all worth it?
  6. The rate of positive results in psychological science (as in many biomedical fields) hovers between 90% and 100%, giving the (false) impression that 90% to 100% of experiments yield such results. Given that most experiments end in failure, should we publish negative results? Should they get the same space on the page?
  7. Have you noticed that you receive invitations from a lot of junk journals? How can we address that scourge?
  8. The Open Science Collaboration, in attempting to replicate the results of “100 experimental and correlational studies published in […] psychology journals”, found that “[a] large portion of replications produced weaker evidence for the original findings despite using materials provided by the original authors, review in advance for methodological fidelity, and high statistical power to detect the original effect sizes”. Will publication always be selectively biased toward “better than average” results, a bias that can only be rooted out via regression to the mean under replication?
  9. How can we incentivize (and possibly fund) the replication of medical/scientific studies?
  10. Should taxpayers have to pay to repeat experiments? How many times?
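The argument that a low alpha level bounds the rate of published errors can be probed with a short back-of-the-envelope calculation. The sketch below (all numbers are illustrative assumptions, not figures from the readings) shows that when only a small fraction of tested hypotheses are true, false positives can dominate the pool of "significant" results even at alpha = 5%:

```python
def false_discovery_rate(prior_true, alpha=0.05, power=0.8):
    """Fraction of statistically significant results that are false positives,
    given the prior probability that a tested hypothesis is true."""
    true_positives = prior_true * power          # true effects correctly detected
    false_positives = (1 - prior_true) * alpha   # null effects wrongly "detected"
    return false_positives / (true_positives + false_positives)

# As the share of true hypotheses shrinks, false positives take over:
for prior in (0.5, 0.1, 0.01):
    print(f"prior P(true) = {prior:.2f} -> FDR = {false_discovery_rate(prior):.2f}")
```

With half of tested hypotheses true, about 6% of positive results are false; with one in a hundred true, roughly 86% are false, even though alpha never changed.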
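The point about selective publication and regression to the mean can also be illustrated with a small simulation (the effect size, noise level, and publication threshold below are assumptions chosen for illustration): when only "impressive" results are published, the published estimates overstate the true effect, and unbiased replications regress back toward it.

```python
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.2    # the real underlying effect size (assumed)
NOISE = 1.0          # sampling noise per study (assumed)
THRESHOLD = 1.0      # only results above this get published (assumed)

published, replications = [], []
for _ in range(100_000):
    observed = random.gauss(TRUE_EFFECT, NOISE)
    if observed > THRESHOLD:                  # selective publication filter
        published.append(observed)
        # an unbiased replication of the same (true) effect
        replications.append(random.gauss(TRUE_EFFECT, NOISE))

print(f"true effect:          {TRUE_EFFECT:.2f}")
print(f"published estimate:   {statistics.mean(published):.2f}")
print(f"replication estimate: {statistics.mean(replications):.2f}")
```

The published average lands well above the true effect, while the replication average sits near it—mirroring the "weaker evidence" the Open Science Collaboration found on replication.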

Essays of possible interest

  1. Reproducibility in science
  2. Estimating the reproducibility of psychological science
  3. Is the replicability crisis overblown?
  4. How many scientists fabricate and falsify research?