what the arsenic effect means for scientific publishing

I don’t know very much about DNA (and by ‘not very much’ I sadly mean ‘next to nothing’), so when someone tells me that life as we know it generally doesn’t use arsenic to make DNA, and that it’s a big deal to find a bacterium that does, I’m willing to believe them. So too, apparently, are at least two or three reviewers for Science, which published a paper last week by a NASA group purporting to demonstrate exactly that.

Turns out the paper might have a few holes. In the last few days, the blogosphere has reached fever pitch as critiques of the article have emerged from every corner; it seems like pretty much everyone with some knowledge of the science in question is unhappy about the paper. Since I’m not in any position to critique the article myself, I’ll take Carl Zimmer’s word for it in Slate yesterday:

Was this merely a case of a few isolated cranks? To find out, I reached out to a dozen experts on Monday. Almost unanimously, they think the NASA scientists have failed to make their case.  “It would be really cool if such a bug existed,” said San Diego State University’s Forest Rohwer, a microbiologist who looks for new species of bacteria and viruses in coral reefs. But, he added, “none of the arguments are very convincing on their own.” That was about as positive as the critics could get. “This paper should not have been published,” said Shelley Copley of the University of Colorado.

Zimmer then follows his Slate piece up with a blog post today in which he provides 13 experts’ unadulterated comments. While there are one or two (somewhat) positive reviews, the consensus clearly seems to be that the Science paper is (very) bad science.

Of course, scientists (yes, even Science reviewers) do occasionally make mistakes, so if we’re being charitable about it, we might chalk it up to human error (though some of the critiques suggest that these are elementary problems that could have been very easily addressed, so it’s possible there’s some disingenuousness involved). But what many bloggers (1, 2, 3, etc.) have found particularly inexcusable is the way NASA and the research team have handled the criticism. Zimmer again, in Slate:

I asked two of the authors of the study if they wanted to respond to the criticism of their paper. Both politely declined by email.

“We cannot indiscriminately wade into a media forum for debate at this time,” declared senior author Ronald Oremland of the U.S. Geological Survey. “If we are wrong, then other scientists should be motivated to reproduce our findings. If we are right (and I am strongly convinced that we are) our competitors will agree and help to advance our understanding of this phenomenon. I am eager for them to do so.”

“Any discourse will have to be peer-reviewed in the same manner as our paper was, and go through a vetting process so that all discussion is properly moderated,” wrote Felisa Wolfe-Simon of the NASA Astrobiology Institute. “The items you are presenting do not represent the proper way to engage in a scientific discourse and we will not respond in this manner.”

A NASA spokesperson basically reiterated this point of view, indicating that NASA scientists weren’t going to respond to criticism of their work unless that criticism appeared in, you know, a respectable, peer-reviewed outlet. (Fortunately, at least one of the critics already has a draft letter to Science up on her blog.)

I don’t think it’s surprising that people who spend much of their free time blogging about science, and think it’s important to discuss scientific issues in a public venue, generally aren’t going to like being told that science blogging isn’t a legitimate form of scientific discourse. Especially considering that the critics here aren’t laypeople without scientific training; they’re well-respected scientists with areas of expertise that are directly relevant to the paper. In this case, dismissing trenchant criticism because it appears on the web rather than in a peer-reviewed journal seems kind of like refusing to listen to someone screaming that your house is on fire until they adopt a more polite tone. It just seems counterproductive.

That said, I personally don’t think we should take the NASA team’s statements at face value. I very much doubt that what the NASA researchers are saying really reflects any deep philosophical view about the role of blogs in scientific discourse; it’s much more likely that they’re simply trying to buy some time while they figure out how to respond. On the face of it, they have a choice between two lousy options: either ignore the criticism entirely, which would be antithetical to the scientific process and would look very bad, or address it head-on–which, judging by the vociferousness and near-unanimity of the commentators, is probably going to be a losing battle. Shifting the terms of the debate by insisting on responding only in a peer-reviewed venue doesn’t really change anything, but it does buy the authors two or three weeks. And two or three weeks is worth like, forty attentional cycles in the blogosphere.

Mind you, I’m not saying we should sympathize with the NASA researchers just because they’re in a tough position. I think one of the main reasons the story’s attracted so much attention is precisely because people see it as a case of justice being served. The NASA team called a major press conference ahead of the paper’s publication, published its results in one of the world’s most prestigious science journals, and yet apparently failed to run relatively basic experimental controls in support of its conclusions. If the critics are to be believed, the NASA researchers are either disingenuous or incompetent; either way, we shouldn’t feel sorry for them.

What I do think this episode shows is that the rules of scientific publishing have fundamentally changed in the last few years–and largely for the better. I haven’t been doing science for very long, but even in the halcyon days of 2003, when I started graduate school, science blogging was practically nonexistent, and the main way you’d find out what other people thought about an influential new paper was by talking to people you knew at conferences (which could take several months) or waiting for critiques or replication failures to emerge in other peer-reviewed journals (which could take years). That kind of delay between publication and evaluation is disastrous for science, because in the time it takes for a consensus to emerge that a paper is no good, several research teams might have already started trying to replicate and extend the reported findings, and several dozen other researchers might have uncritically cited the paper in passing in their own work. This delay is probably why, as John Ioannidis’ work so elegantly demonstrates, major studies published in high-impact journals tend to exert a disproportionate influence on the literature long after they’ve been resoundingly discredited.

The Arsenic Effect, if we can call it that, provides a nice illustration of the impact of new media on scientific communication. It’s a safe bet that there are now very few people who do anything even vaguely related to the NASA team’s research who haven’t been made aware that the reported findings are controversial. Which means that the process of attempting to replicate (or falsify) the findings will proceed much more quickly than it might have ten or twenty years ago, and there probably won’t be very many people who cite the Science paper as compelling evidence of terrestrial arsenic-based life. Perhaps more importantly, as researchers get used to the idea that their high-profile work is going to be instantly evaluated by thousands of pairs of highly trained eyes, any of which might be attached to a highly prolific pair of typing hands, there will be an increasingly strong incentive to avoid being careless. That isn’t to say that bad science will disappear, of course; just that, in cases where the badness reflects a pressure to tell a good story at all costs, we’ll probably see less of it.

elsewhere on the net

Some neat links from the past few weeks:

  • You Are Not So Smart: A celebration of self-delusion. An excellent blog by journalist David McRaney that deconstructs common myths about the way the mind works.
  • NPR has a great story by Jon Hamilton about the famous saga of Einstein’s brain and what it’s helped teach us about brain function. [via Carl Zimmer]
  • The Neuroskeptic has a characteristically excellent 1,000-word explanation of how fMRI works.
  • David Rock has an interesting post on some recent work from Baumeister’s group purportedly showing that it’s good to believe in free will (whether or not it exists). My own feeling about this is that Baumeister’s not really studying people’s philosophical views about free will, but rather a construct closely related to self-efficacy and locus of control. But it’s certainly an interesting line of research.
  • The Prodigal Academic is a great new blog about all things academic. I’ve found it particularly interesting since several of the posts so far have been about job searches and job-seeking–something I’ll be getting my fill of over the next few months.
  • Prof-like Substance has a great 5-part series (1, 2, 3, 4, 5) on how blogging helps him as an academic. My own (much less eloquent) thoughts on that are here.
  • Cameron Neylon makes a nice case for the development of social webs for data mining.
  • Speaking of data mining, Michael Driscoll of Dataspora has an interesting pair of posts extolling the virtues of Big Data.
  • And just to balance things out, there’s this article in the New York Times by John Allen Paulos that offers some cautionary words about the challenges of using empirical data to support policy decisions.
  • On a totally science-less note, some nifty drawings (or is that photos?) by Ben Heine (via Crooked Brains).

academic bloggers on blogging

Is it wise for academics to blog? Depends on who you ask. Scott Sumner summarizes his first year of blogging this way:

Be careful what you wish for.  Last February 2nd I started this blog with very low expectations.  During the first three weeks most of the comments were from Aaron Jackson and Bill Woolsey.  I knew I wasn’t a good writer, years ago I got a referee report back from an anonymous referee (named McCloskey) who said “if the author had used no commas at all, his use of commas would have been more nearly correct.”  Ouch!  But it was true, others said similar things.  And I was also pretty sure that the content was not of much interest to anyone.

Now my biggest problem is time—I spend 6 to 10 hours a day on the blog, seven days a week.  Several hours are spent responding to reader comments and the rest is spent writing long-winded posts and checking other economics blogs.  And I still miss many blogs that I feel I should be reading. …

Regrets?  I’m pretty fatalistic about things.  I suppose it wasn’t a smart career move to spend so much time on the blog.  If I had ignored my commenters I could have had my manuscript revised by now. …  And I really don’t get any support from Bentley, as far as I know the higher ups don’t even know I have a blog. So I just did 2500 hours of uncompensated labor.

I don’t think Sumner actually regrets blogging (as the rest of his excellent post makes clear), but he does seem to think it’s hurt him professionally in some ways–most notably, because of all the time he spends blogging that he could be doing something else (like revising that manuscript).

Andrew Gelman has a very different take:

I agree with Sethi that Sumner’s post is interesting and captures much of the blogging experience. But I don’t agree with that last bit about it being a bad career move. Or perhaps Sumner was kidding? (It’s notoriously difficult to convey intonation in typed speech.) What exactly is the marginal value of his having a manuscript revised? It’s not like Bentley would be compensating him for that either, right? For someone like Sumner (or, for that matter, Alex Tabarrok or Tyler Cowen or my Columbia colleague Peter Woit), blogging would seem to be an excellent career move, both by giving them and their ideas much wider exposure than they otherwise would’ve had, and also (as Sumner himself notes) by being a convenient way to generate many thousands of words that can be later reworked into a book. This is particularly true of Sumner (more than Tabarrok or Cowen or, for that matter, me) because he tends to write long posts on common themes. (Rajiv Sethi, too, might be able to put together a book or some coherent articles by tying together his recent blog entries.)

Blogging and careers, blogging and careers . . . is blogging ever really bad for an academic career? I don’t know. I imagine that some academics spend lots of time on blogs that nobody reads, and that could definitely be bad for their careers in an opportunity-cost sort of way. Others such as Steven Levitt or Dan Ariely blog in an often-interesting but sometimes careless sort of way. This might be bad for their careers, but quite possibly they’ve reached a level of fame in which this sort of thing can’t really hurt them anymore. And this is fine; such researchers can make useful contributions with their speculations and let the Gelmans and Fungs of the world clean up after them. We each have our role in this food web. … And then of course there are the many many bloggers, academic and otherwise, whose work I assume I would’ve encountered much more rarely were they not blogging.

My own experience falls much more in line with Gelman’s here; my blogging experience has been almost wholly positive. Some of the benefits I’ve found to blogging regularly:

  • I’ve had many interesting email exchanges with people that started via a comment on something I wrote, and some of these will likely turn into collaborations at some point in the future.
  • I’ve been exposed to lots of interesting things (journal articles, blog posts, datasets, you name it) I wouldn’t have come across otherwise–either via links left in comments or sent by email, or while rooting around the web for things to write about.
  • I’ve gotten to publicize and promote my own research, which is always nice. As Gelman points out, it’s easier to learn about other people’s work if those people are actively blogging about it. I think that’s particularly true for people who are just starting out their careers.
  • I think blogging has improved both my ability and my willingness to write. By nature, I don’t actually like writing very much, and (like most academics I know) I find writing journal articles particularly unpleasant. Forcing myself to blog (semi-)regularly has instilled a certain discipline about writing that I haven’t always had, and if nothing else, it’s good practice.
  • I get to share ideas and findings I find interesting and/or important with other people. This is already what most academics do over drinks at conferences (and I think it’s a really important part of science), and blogging seems like a pretty natural extension.

All this isn’t to say that there aren’t any potential drawbacks to blogging. I think there are at least two important ones. One is the obvious point that, unless you’re blogging anonymously, it’s probably unwise to say things online that you wouldn’t feel comfortable saying in person. So, despite being a class-A jackass–er, pretty critical by nature–I try to discuss things I like as often as things I don’t like, and to keep the tone constructive whenever I do the latter.

The other potential drawback, which both Sumner and Gelman allude to, is the opportunity cost. If you’re spending half of your daylight hours blogging, there’s no question it’s going to have an impact on your academic productivity. But in practice, I don’t think blogging too much is a problem many academic bloggers have. I usually find myself wishing most of the bloggers I read posted more often. In my own case, I almost exclusively blog after around 9 or 10 pm, when I’m no longer capable of doing sustained work on manuscripts anyway (I’m generally at my peak in the late morning and early afternoon). So, for me, blogging has replaced about ten hours a week of book reading/TV watching/web surfing, while leaving the amount of “real” work I do largely unchanged. That’s not really much of a cost, and I might even classify it as another benefit. With the admittedly important caveat that watching less television has made me undeniably useless at trivia night.

the grand canada tour, 2010 edition

Blogging will be slow(er than normal) for the next couple of weeks. On Wednesday I’m off on a long-awaited Grand Tour of Canada, 2010 edition. The official purpose of the trip is the CNS meeting in Montreal, but seeing as I’m from Canada and most of my family is in Toronto and Ottawa, I’ll be tacking on a few days of R&R at either end of the trip, so I’ll be gone for 10 days. By R&R I mean that I’ll be spending most of my time in Toronto at cheap all-you-can-eat sushi restaurants, and most of my time in Ottawa sleeping in till noon in my mom’s basement.  So really, I guess my plan for the next two weeks is to turn seventeen again.

While I’m in Ottawa, I’ll also be giving a talk at Carleton University. I’d like to lump this under the “invited talks” section of my vita–you know, just to make myself seem slightly more important (being invited somewhere means people actually want to hear you say stuff!)–but I’m not sure it counts as “invited” if you invite yourself to give a talk somewhere else. Which is basically what happened; I did my undergraduate degree at Carleton, so when I emailed my honors thesis advisor to ask if I could give a talk when I was in town, he probably felt compelled to humor me, much as I know he’d secretly like to say no (sorry John!). At any rate, the talk will be closely based on this paper on the relation between personality and word use among bloggers. Amazingly enough, it turns out you can learn something (but not that much) about people from what they write on their blogs. It’s not the most exciting conclusion in the world, but I think there are some interesting results hidden away in there somewhere. If you happen to come across any of them, let me know.