Seth Mnookin on the current troubling state of science reporting

Seth Mnookin over at PLOS has an excellent piece on the present state of science writing, worth reading in full. Here are some highlights:

Lest anyone think that the ENCODE case was sui generis, just this past Wednesday, a team of researchers based in France published a paper in PLOS ONE titled “Why Most Biomedical Findings Echoed by Newspapers Turn Out to be False: The Case of Attention Deficit Hyperactivity Disorder.” (The paper’s authors were intentionally evoking the title of John P. A. Ioannidis’s groundbreaking 2005 piece, “Why most published research findings are false,” which built off of his earlier JAMA paper, “Contradicted and Initially Stronger Effects in Highly Cited Clinical Research.”) After examining every newspaper report about the ten most covered research papers on ADHD from the 1990s, the authors were able to provide empirical evidence for a troubling phenomenon that seems to be all but baked into the way our scientific culture operates: We pay lots of attention to things that are almost assuredly not true.

The first cited article above (by Gonon, Konsman, Cohen, and Boraud) examined 47 scientific publications on ADHD from the 1990s, “echoed by 347 newspaper articles.” The authors selected the 10 most echoed publications, tracked all subsequent research relating to them up to 2011, and also followed the newspaper coverage of that research. Here are their results:

Seven of these ten publications were initial studies; three were not. Of the seven, “the conclusions in six of them were either refuted or strongly attenuated subsequently. The seventh was not confirmed or refuted, but its main conclusion appears unlikely.” Of the three, one of the findings has been attenuated. (In other words, eight of the ten studies, or 80%, cannot be confirmed as originally stated, and six, or 60%, have already been either refuted or “strongly attenuated.”)
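For the curious, the tally behind those percentages is easy to check. A quick sketch (Python, using only the counts quoted above):

```python
# Outcomes of the ten most-echoed ADHD publications, per Gonon et al.
total = 10
initial_refuted_or_attenuated = 6   # of the seven initial studies
initial_unconfirmed = 1             # the seventh: neither confirmed nor refuted
later_attenuated = 1                # of the three non-initial studies

not_confirmed = (initial_refuted_or_attenuated
                 + initial_unconfirmed
                 + later_attenuated)
print(not_confirmed * 100 // total)                  # 80 -> cannot be confirmed as stated
print(initial_refuted_or_attenuated * 100 // total)  # 60 -> refuted or strongly attenuated
```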

Now, how did newspapers report on all this? “The newspaper coverage of the ‘top 10’ publications (223 articles) was much larger than that of the 67 related studies (57 articles). Moreover, only one of the latter newspaper articles reported that the corresponding ‘top 10’ finding had been attenuated.”

Put another way, newspapers paid almost exactly FOUR times as much attention to the original studies (80% of which have since been refuted or remain unconfirmable as stated) as to the related studies. And a grand total of one out of 57 articles on the related studies even bothered to mention that the original finding could not be confirmed as stated!
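The arithmetic behind that “four times” figure is worth spelling out, since it is the crux of the complaint. A quick check with the two coverage counts from the paper:

```python
# Coverage counts reported in the PLOS ONE paper.
top10_coverage = 223     # newspaper articles on the ten original studies
followup_coverage = 57   # newspaper articles on the 67 related follow-up studies

ratio = top10_coverage / followup_coverage
print(round(ratio, 1))   # 3.9 -> roughly four times the attention

# Only one follow-up article mentioned that an original finding had been attenuated.
print(round(1 / followup_coverage * 100, 1))  # 1.8 -> under 2% of the follow-up coverage
```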

Why is this the case? Mnookin suggests:

Because it’s sexier to discover something than to show there’s nothing to be discovered, high-impact journals show a marked preference for “initial studies” as opposed to disconfirmations. Unfortunately, as anyone who has ever worked in a research lab knows, initial observations are almost inevitably refuted or heavily attenuated by future studies–and that data tends to get printed in less prestigious journals. Newspapers, meanwhile, give lots of attention to those first, eye-catching results while spilling very little (if any) ink on the ongoing research that shows why people shouldn’t have gotten all hot and bothered in the first place… The result? “[A]n almost complete amnesia in the newspaper coverage of biomedical findings.”

He goes on to summarize: “…publications that should be exemplars of nuanced, high-quality reporting are allowing confused speculation to clutter their pages; researchers and PIOs are nudging reporters towards overblown interpretations; and everything we write about will probably end up being wrong anyway — not that we’ll bother to let you know when the time comes.”