Episode 4 - Reproducibility now

Published: July 31, 2018, 10:32 a.m.

This week we dive into the Open Science Collaboration's (2015) paper "Estimating the reproducibility of psychological science"
http://science.sciencemag.org/content/349/6251/aac4716

Highlights:

[1:00] This paper has all of the authors
[1:30] Direct vs conceptual replications
[4:30] PhD students running replications as the basis of extending a paradigm
[6:00] The 100 studies paper methods in brief
[8:00] Everything's available for this collaborative effort, and that is awesome
(https://osf.io/ezcuj)
[9:00] Reproducibility vs replicability - what are we actually talking about?
[9:30] Oxford summer school in reproducibility
(https://www.eventbrite.co.uk/e/oxford-reproducibility-school-tickets-48405892327)
[11:00] A paper discussing the computational reproducibility of papers
[15:00] Replication is not only about the p value, folks!
[17:30] Sam brings up Bayes purely to be a douchebag
[19:30] A Bayesian approach - Sophia gives us the paper (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0149794) and we move on
[20:00] Replications as a method to diagnose problems in science. Are replications a viable problem solver?
[24:00] Psychology is only a teenager, really
[26:00] If the original paper is trash, there's probably no need to replicate it. Maybe just burn it down?
[27:00] Figure 1 - the average effect size halved in the replication attempt, and most effects did not replicate
[31:30] Do the results hint at more than publication bias? Are other QRPs involved?
[33:00] Comparing reproducibility across subfields of psychology. But are these studies representative of an entire subfield?
[35:30] Does journal impact factor mean anything?
[39:30] Are we actually being critical of previous research in general?
[41:00] "Our foundations have as many holes as a Swiss cheese"

Music credit: Kevin MacLeod - Funkeriffic
freepd.com/misc.php