Over at the Guardian, Ananyo Bhattacharya, the chief online editor of Nature, answers some common criticisms that scientists have of science journalism. His piece, called “Nine ways scientists demonstrate that they don’t understand journalism,” is pretty tame, though, and I think a lot of us would agree that science journalists must write their stories using certain conventions. Bhattacharya defends the following conventions that, he says, are criticized by scientists (go to his piece to see some others):
- Starting a story with the important results
- Using limited space because of readers’ limited attention spans
- Using headlines that will draw attention to the study
- Quoting scientists who disagree with the highlighted research
One of the complaints he answers is this:

> The story didn’t contain this or that “essential” caveat.
>
> Was the caveat really essential to someone’s understanding of the story? Are you sure? In my experience, it’s rare that it is. Research papers contain all the caveats that are essential for a complete understanding of the science. They are also seldom read. Even by scientists.
Yes, journalists don’t need to include every caveat that we’re required to add in the discussion, but some of them are important. Take, for example, the use of small sample sizes to demonstrate the existence of “gay genes” or “depression genes,” or the fact that early reports of these genes (later found to be bogus) were limited to single lineages, or relied on associated markers that the press reported as the genes themselves. These are important problems, not trivial caveats. And the caveats weren’t seen in most of the breathless news stories about “genes for gayness” or “genes for depression.”
Second, highlighting potential problems brings home to the reader that science is an ongoing enterprise, that no study is perfect, and, most important, that all scientific truths are provisional. Too many journalists accepted the “arsenic bacteria” story, or the claim that the *Darwinius masillae* fossil was a missing link between the two major groups of primates. A finding can be wrong, or can be revised.
Why aren’t such caveats, or such dissent, presented more often? Well, yes, they could bog down a story, but often I think that journalists aren’t sufficiently trained in science to recognize when a problem is serious. Also, though Bhattacharya rightly emphasizes the need for science journalists to summon dissenting voices in their stories, many journalists are either too lazy to do this or don’t know whom to call. There are some notable counterexamples: Carl Zimmer does a good job of this at The New York Times, as does Faye Flam at The Philadelphia Inquirer. When reporting a new discovery, journalists should routinely search for dissent, and should know enough to determine whether that dissent is significant.
So my main complaints about science journalists are fourfold. First, they often aren’t sufficiently trained to write about science in a meaningful way. It would be nice if the journalist had a degree in the subject being covered, preferably an advanced degree. A journalist should be able to read the paper under consideration and understand it well.
Second, lazy science journalists often just reproduce press releases put out by universities instead of reading the paper and dissecting it themselves. Press releases are not journalism, but puffery.
Third, science journalists are often too lazy to do a proper job of vetting a story (this is related to the preceding beef).
Fourth, journalists often don’t seek out dissent, or make do with token, meaningless dissent.
Readers: what are your complaints about science journalism? Who, in particular, do you think is doing a really good job, or a really crappy one?