P-Values: Are we using a flawed statistical tool?

22.09.2025
Listen to the episode on your favorite platforms:
  • Apple Podcasts
  • YouTube
  • Spotify
  • Castbox
  • Pocket Casts
  • iHeart
  • Overcast
  • Castro
  • RadioPublic

P-values show up in almost every scientific paper, yet they’re one of the most misunderstood ideas in statistics. In this episode, we break from our usual journal-club format to unpack what a p-value really is, why researchers have fought about it for a century, and how that famous 0.05 cutoff became enshrined in science. Along the way, we share stories from our own papers—from a Nature feature that helped reshape the debate to a statistical sleuthing project that uncovered a faulty method in sports science. The result: a behind-the-scenes look at how one statistical tool has shaped the culture of science itself.

Statistical topics

  • Bayesian statistics
  • Confidence intervals 
  • Effect size vs. statistical significance
  • Fisher’s conception of p-values
  • Frequentist perspective
  • Magnitude-Based Inference (MBI)
  • Multiple testing / multiple comparisons
  • Neyman-Pearson hypothesis testing framework
  • P-hacking
  • Posterior probabilities
  • Preregistration and registered reports
  • Prior probabilities
  • P-values
  • Researcher degrees of freedom
  • Significance thresholds (p < 0.05)
  • Simulation-based inference
  • Statistical power 
  • Statistical significance
  • Transparency in research 
  • Type I error (false positive)
  • Type II error (false negative)
  • Winner’s Curse
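
To make the episode's central idea concrete: a p-value answers, "If the null hypothesis were true, how often would we see results at least this extreme?" Below is a minimal sketch of that calculation for Paul the psychic octopus, using the often-cited 8-for-8 record of correct match picks and a coin-flip null; both figures are assumptions for illustration, not from the episode itself. It computes the one-sided p-value two ways: exactly (binomial formula) and by simulation-based inference, both of which appear in the topic list above.

```python
import random
from math import comb

def exact_p_value(successes: int, trials: int, p_null: float = 0.5) -> float:
    """One-sided exact binomial p-value: P(X >= successes) under the null."""
    return sum(comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
               for k in range(successes, trials + 1))

def simulated_p_value(successes: int, trials: int, n_sims: int = 100_000,
                      p_null: float = 0.5, seed: int = 42) -> float:
    """Simulation-based p-value: fraction of null-world replicates
    that do at least as well as the observed result."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        correct = sum(rng.random() < p_null for _ in range(trials))
        if correct >= successes:
            hits += 1
    return hits / n_sims

# Paul the octopus: 8 correct picks out of 8 matches, coin-flip null
print(exact_p_value(8, 8))                    # 0.00390625, i.e. 1/256
print(round(simulated_p_value(8, 8), 4))      # close to the exact value
```

Note what this p-value is and is not: it says an octopus guessing at random would go 8-for-8 only about 0.4% of the time. It does not say there is a 0.4% chance the null is true, which is exactly the misinterpretation the episode (and the first methodological moral below) warns against.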

Methodological morals

  • “If p-values tell us the probability the null is true, then octopuses are psychic.”
  • “Statistical tools don't fool us, blind faith in them does.”

References

Kristin and Regina’s online courses:

  • Demystifying Data: A Modern Approach to Statistical Understanding
  • Clinical Trials: Design, Strategy, and Analysis
  • Medical Statistics Certificate Program
  • Writing in the Sciences

Programs that we teach in:

  • Epidemiology and Clinical Research Graduate Certificate Program

Find us on:

  • Kristin - LinkedIn & Twitter/X
  • Regina - LinkedIn & ReginaNuzzo.com

Timestamps
  • () - Intro & claim of the episode
  • () - Why p-values matter in science
  • () - What is a p-value? (ESP guessing game)
  • () - Big vs. small p-values (psychic octopus example)
  • () - Significance thresholds and the 0.05 rule
  • () - Regina’s Nature paper on p-values
  • () - Misconceptions about p-values
  • () - Fisher vs. Neyman-Pearson (history & feud)
  • () - Botox analogy and type I vs. type II errors
  • () - Dating app analogies for false positives/negatives
  • () - How the 0.05 cutoff got enshrined
  • () - Misinterpretations: statistical vs. practical significance
  • () - Effect size, sample size, and “statistically discernible”
  • () - P-hacking and researcher degrees of freedom
  • () - Transparency, preregistration, and open science
  • () - The 0.05 cutoff trap (p = 0.049 vs 0.051)
  • () - The biggest misinterpretation: what p-values actually mean
  • () - Paul the psychic octopus (worked example)
  • () - Why Bayesian statistics differ
  • () - Why aren’t we all Bayesian? (probability wars)
  • () - The ASA p-value statement (behind the scenes)
  • () - Key principles from the ASA white paper
  • () - Wrapping up Regina’s paper
  • () - Kristin’s paper on sports science (MBI)
  • () - What MBI is and how it spread
  • () - How Kristin got pulled in (Christie Aschwanden & FiveThirtyEight)
  • () - Critiques of MBI and “Bayesian monster” rebuttal
  • () - Spreadsheet autopsies (Welsh & Knight)
  • () - Cherry juice example (why MBI misleads)
  • () - Rebuttals and smoke & mirrors from MBI advocates
  • () - Winner’s Curse and small samples
  • () - Twitter fights & “establishment statistician”
  • () - Cult-like following & Matrix red pill analogy
  • () - Wrap-up