The truth wears off indeed
I see that the New Yorker has ungated Jonah Lehrer’s The Truth Wears Off, on how statistical significance seems to decline over time. Lehrer’s is one of those rare features with a punch line, so if you want to enjoy the piece fully you should read it before this post.
Lots of smart people have weighed in online. The best follow-up I’ve read so far is from Andrew Gelman, who links to his own piece from American Scientist. (H/t to Chris Blattman.) But as plenty of good has been said about the piece already, I’d like to point out the one thing about it that bugged me.
Early in the piece, Lehrer writes at length about Jonathan Schooler, a psych researcher who has found that he has difficulty reproducing his own results.
…while Schooler was publishing these results in highly reputable journals, a secret worry gnawed at him: it was proving difficult to replicate his earlier findings. “I’d often still see an effect, but the effect just wouldn’t be as strong,” he told me. “It was as if verbal overshadowing, my big new idea, was getting weaker.” At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research.
The central insight of the piece is that declining statistical significance in a field overall is one of the effects of publication bias. When an idea is new, strong positive results are required for publication. As it becomes accepted knowledge, contradictory results become interesting enough to publish. That’s a brilliant, important observation.
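That mechanism is easy to see in a toy simulation. The sketch below is my own illustration, not anything from Lehrer's piece: a fixed true effect, noisy studies, and a hypothetical selection rule that demands strong positive results early on but publishes anything once the idea is established. The published record shows a "decline" even though the phenomenon never changes.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # the real effect size, constant the whole time
NOISE_SD = 1.0      # sampling noise in each study
N_STUDIES = 200     # studies attempted per year

published = []
for year in range(10):
    # Hypothetical selection rule: for the first five years, only
    # strong positive results get into journals; after the idea is
    # accepted, any result (including contradictions) is publishable.
    threshold = 0.5 if year < 5 else float("-inf")
    year_effects = [
        observed
        for observed in (
            random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)
        )
        if observed > threshold
    ]
    published.append(statistics.mean(year_effects))

early = statistics.mean(published[:5])  # inflated by selection
late = statistics.mean(published[5:])   # close to the true effect
print(f"early published mean: {early:.2f}, late published mean: {late:.2f}")
```

The literature appears to show the effect "wearing off," but all that changed was the filter on what got published: the early published mean sits well above the true effect, and the later one converges back to it.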
The problem is that Schooler’s experience can’t be explained by publication bias. He is just one researcher. If publication bias were the only force in play, individual researchers wouldn’t see their results change over time, only their chances of getting those results published.
Schooler’s problem is probably better described as regression to the mean following a few anomalous results, as the article acknowledges. (Though it is a pretty weird case, and I’d love to hear a better explanation, especially for his tests of the decline effect itself.) That’s a less important issue than publication bias. It’s not a problem with the scientific method if anomalous results are gradually disproven, but publication bias can twist our perception of the world over the long term.
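Regression to the mean can be sketched the same way. This is again my own toy illustration, with made-up numbers: a researcher pursues an idea only after an unusually strong pilot result, and honest follow-up studies then look weaker, even though the underlying effect never moved.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.3   # a modest real effect, constant throughout
NOISE_SD = 1.0      # sampling noise in each study

def run_study():
    """One noisy estimate of the (unchanging) true effect."""
    return random.gauss(TRUE_EFFECT, NOISE_SD)

# The line of research starts only because of a lucky draw: keep
# piloting until one study shows an unusually large effect.
first = run_study()
while first < 1.5:
    first = run_study()

# Later replications are unconditioned draws from the same process.
replications = [run_study() for _ in range(50)]
rep_mean = statistics.mean(replications)
print(f"launching result: {first:.2f}, replication mean: {rep_mean:.2f}")
```

The replications cluster around the true effect, so from the researcher’s chair it feels like the phenomenon is fading, when really the first result was just an outlier that selection guaranteed would be there.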
If the piece is really raising questions about the scientific method, why do we read so much about Schooler? And why do transitions throughout the piece seem to relate his research to the decline effect in fields as a whole? I suspect it’s because even though it isn’t illustrative of the article’s central point, his story is interesting.
This is the sort of compromise that writers and editors make during revision. Some characters seem too compelling to cut, even when their inclusion confuses the point, as I believe it does in this case. But we’re supposed to be servants of the truth, not just great stories, and I expect better of the New Yorker. Maybe that’s an unrealistic standard.