Why I still won’t review for or publish with Elsevier–and think you shouldn’t either

In 2012, I signed the Cost of Knowledge pledge, and stopped reviewing for, and publishing in, all Elsevier journals. In the four years since, I’ve adhered closely to this policy; with a couple of exceptions (see below), I’ve turned down every review request I’ve received from an Elsevier-owned journal, and haven’t sent Elsevier journals any of my own papers for publication.

Contrary to what a couple of people I talked to at the time intimated might happen, my scientific world didn’t immediately collapse. The only real consequences I’ve experienced as a result of avoiding Elsevier are that (a) on perhaps two or three occasions, I’ve had to think a little bit longer about where to send a particular manuscript, and (b) I’ve had a few dozen conversations (all perfectly civil) about Elsevier and/or academic publishing norms that I otherwise probably wouldn’t have had. Other than that, there’s been essentially no impact on my professional life. I don’t feel that my unwillingness to publish in NeuroImage, Neuron, or Journal of Research in Personality has hurt my productivity or reputation in any meaningful way. And I continue to stand by my position that it’s a mistake for scientists to do business with a publishing company that actively lobbies against the scientific community’s best interests.

While I’ve never hidden the fact that I won’t deal with Elsevier, and am perfectly comfortable talking about the subject when it comes up, I also haven’t loudly publicized my views. Aside from a parenthetical mention of the issue in one or two (sometimes satirical) blog posts, and an occasional tweet, I’ve never written anything explicitly urging others to adopt the same stance. The reason for this is not that I don’t believe it’s an important issue; it’s that I thought Elsevier’s persistently antagonistic behavior towards scientists’ interests was common knowledge, and that most scientists continue to provide their free expert labor to Elsevier because they’ve decided that the benefits outweigh the costs. In other words, I was under the impression that other people share my facts, just not my interpretation of the facts.

I now think I was wrong about this. A series of tweets a few months ago (yes, I know, I’m slow to get blog posts out these days) prompted my reevaluation. It began with this:

Which led a couple of people to ask why I don’t review for Elsevier. I replied:


All of this information is completely public, and much of it features prominently in Elsevier’s rather surreal Wikipedia entry–nearly two thirds of which consists of “Criticism and Controversies” (and no, I haven’t personally contributed anything to that entry). As such, I assumed Elsevier’s track record of bad behavior was public knowledge. But the responses to my tweets suggested otherwise. And in the months since, I’ve had several other twitter or real-life conversations with people where it quickly became clear that the other party was not, in fact, aware of (m)any of the scandals Elsevier has been embroiled in.

In hindsight, this shouldn’t have surprised me. There’s really no good reason why most scientists should be aware of what Elsevier’s been up to all this time. Sure, most scientists cross paths with Elsevier at some point; but so what? It’s not as though I thoroughly research every company I have contractual dealings with; I usually just go about my business and assume the best about the people I’m dealing with–or at the very least, I try not to assume the worst.

Unfortunately, sometimes it turns out that that assumption is wrong. And on those occasions, I generally want to know about it. So, in that spirit, I thought I’d expand on my thoughts about Elsevier beyond the 140-character format I’ve adopted in the past, in the hopes that other people might also be swayed to at least think twice about submitting their work to Elsevier journals.

Is Elsevier really so evil?

Yeah, kinda. Here’s a list of just some of the shady things Elsevier has previously been caught doing–none of which, as far as I know, the company contests at this point:

  • They used to organize arms trade fairs, until a bunch of academics complained that a scholarly publisher probably shouldn’t be in the arms trade, at which point they sold that division off;
  • In 2009, they were caught having created and sold half a dozen entire fake journals to pharmaceutical companies (e.g., Merck), so that those companies could fill the pages of the journals, issue after issue, with reprinted articles that cast a positive light on their drugs;
  • They regularly sell access to articles they don’t own, including articles licensed for non-commercial use–in clear contravention of copyright law, and despite repeated observations by academics that this kind of thing should not be technically difficult to stop if Elsevier actually wanted it to stop;
  • Their pricing model is based on the concept of the “Big Deal”: Elsevier (and, to be fair, most other major publishers) forces universities to pay for huge numbers of their journals at once by pricing individual journals prohibitively, ensuring that institutions can’t order only the journals they think they’ll actually use (this practice is very much like the “bundling” practiced by the cable TV industry); they also bar customers from revealing how much they paid for access, and freedom-of-information requests reveal enormous heterogeneity in pricing across universities, often at levels that are prohibitive for libraries;
  • They recently bought the SSRN preprint repository, and after promising to uphold SSRN’s existing operating procedures, almost immediately began to remove articles that were legally deposited on the service, but competed with “official” versions published elsewhere;
  • They have repeatedly spurned requests from the editorial boards of their journals to lower journal pricing, decrease open access fees, or make journals open access; this has resulted in several editorial boards abandoning the Elsevier platform wholesale and moving their operation elsewhere (Lingua being perhaps the best-known example)–often taking large communities with them;
  • Perhaps most importantly (at least in my view), they actively lobbied the US government against open access mandates, making multiple donations to the congressional sponsors of a bill called the Research Works Act that would have resulted in the elimination of the current law mandating deposition of all US government-funded scientific works in public repositories within 12 months after publication.

The pattern in these cases is almost always the same: Elsevier does something that directly works against the scientific community’s best interests (and, in some cases, the law), and then, when it gets caught with its hand in the cookie jar, it apologizes and fixes the problem (well, at least to some degree; they somehow can’t seem to stop selling OA-licensed articles, because it is apparently very difficult for a multibillion-dollar company to screen the papers that appear on its websites). A few months later, another scandal comes to light, and then the cycle repeats.

Elsevier is, of course, a large company, and one could reasonably chalk one or two of the above actions up to poor management or bad judgment. But there’s a point at which the belief that this kind of thing is just an unfortunate accident–as opposed to an integral part of the business model–becomes very difficult to sustain. In my case, I was aware of a number of the above practices before I signed The Cost of Knowledge pledge; for me, the straw that broke the camel’s back was Elsevier’s unabashed support of the Research Works Act. While I certainly don’t expect any corporation (for-profit or otherwise) to actively go out and sabotage its own financial interests, most organizations seem to know better than to publicly lobby for laws that would actively and unequivocally hurt the primary constituency they make their money off of. While Elsevier wasn’t alone in its support of the RWA, it’s notable that many for-profit (and most non-profit) publishers explicitly expressed their opposition to the bill (e.g., MIT Press, Nature Publishing Group, and the AAAS). To my mind, there wasn’t (and isn’t) any reason to support a company that, on top of arms sales, fake journals, and copyright violations, thinks it’s okay to lobby the government to make it harder for taxpayers to access the results of publicly-funded research that’s generated and reviewed at no cost to Elsevier itself. So I didn’t, and still don’t.

Objections (and counter-objections)

In the 4 years since I stopped writing or reviewing for Elsevier, I’ve had many conversations with colleagues about this issue. Since most of my colleagues don’t share my position (though there are a few exceptions), I’ve received a certain amount of pushback. While I’m always happy to engage on the issue, so far, I can’t say that I’ve found any of the arguments I’ve heard sufficiently compelling to cause me to change my position. I’m not sure if my arguments have led anyone else to change their view either, but in the interest of consolidating discussion in one place (if only so that I can point people to it in future, instead of reprising the same arguments over and over again), I thought I’d lay out all of the major objections I’ve heard to date, along with my response(s) to each one. If you have other objections you feel aren’t addressed here, please leave a comment, and I’ll do my best to address them (and perhaps add them to the list).

Without further ado, and in no particular order, here are the pro-Elsevier (or at least, anti-anti-Elsevier) arguments, as I’ve heard and understood them:

“You can’t really blame Elsevier for doing this sort of thing. Corporations exist to make money; they have a fiduciary responsibility to their shareholders to do whatever they legally can to increase revenue and decrease expenses.”

For what it’s worth, I think the “fiduciary responsibility” argument–which seemingly gets trotted out almost any time anyone calls out a publicly traded corporation for acting badly–is utterly laughable. As far as I can tell, the claim it relies on is both unverifiable and unenforceable. In practice, there is rarely any way for anyone to tell whether a particular policy will hurt or help a company’s bottom line, and virtually any action one takes can be justified post-hoc by saying that it was the decision-makers’ informed judgment that it was in the company’s best interest. Presumably part of the reason publishing groups like NPG or MIT Press don’t get caught pulling this kind of shit nearly as often as Elsevier is that part of their executives’ decision-making process includes thoughts like gee, it would be really bad for our bottom line if scientists caught wind of what we’re doing here and stopped giving us all this free labor. You can tell a story defending pretty much any policy, or its polar opposite, on grounds of fiduciary responsibility, but I think it’s very unlikely that anyone is ever going to knock on an Elsevier executive’s door threatening to call in the lawyers because Elsevier just hasn’t been working hard enough lately to sell fake journals.

That said, even if you were to disagree with my assessment, and decided to take the fiduciary responsibility argument at face value, it would still be completely and utterly irrelevant to my personal decision not to work for Elsevier any more. The fact that Elsevier is doing what it’s (allegedly) legally obligated to do doesn’t mean that I have to passively go along with it. Elsevier may be legally allowed or even obligated to try to take advantage of my labor, but I’m just as free to follow my own moral compass and refuse. I can’t imagine how my individual decision to engage in moral purchasing could possibly be more objectionable to anyone than a giant corporation’s “we’ll do anything legal to make money” policy.

“It doesn’t seem fair to single out Elsevier when all of the other for-profit publishers are just as bad.”

I have two responses to this. First, I think the record pretty clearly suggests that Elsevier does in fact behave more poorly than the vast majority of other major academic publishers (there are arguably a number of tiny predatory publishers that are worse–but of course, I don’t think anyone should review for or publish with them either!). It’s not that publishers like Springer or Wiley are without fault; but they at least don’t seem to get caught working against the scientific community’s interests nearly as often. So I think Elsevier’s particularly bad track record makes it perfectly reasonable to focus attention on Elsevier in particular.

Second, I don’t think it would, or should, make any difference to the analysis even if it turned out that Springer or Wiley were just as bad. The reason I refuse to publish with Elsevier is not that they’re the only bad apples, but that I know that they’re bad apples. The fact that there might be other bad actors we don’t know about doesn’t mean we shouldn’t take actions against the bad actors we do know about. In fact, it wouldn’t mean that even if we did know of other equally bad actors. Most people presumably think there are many charities worth giving money to, but when we learn that someone donated money to a breast cancer charity, we don’t get all indignant and say, oh sure, you give money to cancer, but you don’t think heart disease is a serious enough problem to deserve your support? Instead, we say, it’s great that you’re doing what you can–we know you don’t have unlimited resources.

Moreover, from a collective action standpoint, there’s a good deal to be said for making an example out of a single bad actor rather than trying to distribute effort across a large number of targets. The reality is that very few academics perceive themselves to be in a position to walk away from all academic publishers known to engage in questionable practices. Collective action provides a means for researchers to exercise positive force on the publishing ecosystem in a way that cannot be achieved by each individual researcher making haphazard decisions about where to send their papers. So I would argue that as long as researchers agree that (a) Elsevier’s policies hurt scientists and taxpayers, and (b) Elsevier is at the very least one of the worst actors, it makes a good deal of sense to focus our collective energy on Elsevier. I would hazard a guess that if a concerted action on the part of scientists had a significant impact on Elsevier’s bottom line, other publishers would sit up and take notice rather quickly.

“You can choose to submit your own articles wherever you like; that’s totally up to you. But when you refuse to review for all Elsevier journals, you do a disservice to your colleagues, who count on you to use your expertise to evaluate other people’s manuscripts and thereby help maintain the quality of the literature as a whole.”

I think this is a valid concern in the case of very early-career academics, who very rarely get invited to review papers, and have no good reason to turn such requests down. In such cases, refusing to review for Elsevier journals would indeed make everyone else’s life a little bit more difficult (even if it also helps a tiny bit to achieve the long-term goal of incentivizing Elsevier to either shape up or disappear). But I don’t think the argument carries much force with most academics, because most of us have already reached the review saturation point of our careers–i.e., the point at which we can’t possibly (or just aren’t willing to) accept all the review assignments we receive. For example, at this point, I average about 3-4 article reviews a month, and I typically turn down about twice that many invitations to review. If I accepted any invitations from Elsevier journals, I would simply have to turn down an equal number of invitations from non-Elsevier journals–almost invariably ones with policies that I view as more beneficial to the scientific community. So it’s not true that I’m doing the scientific community a disservice by refusing to review for Elsevier; if anything, I’m doing it a service by preferentially reviewing for journals that I believe are better aligned with the scientific community’s long-term interests.

Now, on fairly rare occasions, I do get asked to review papers focusing on issues that I think I have particularly strong expertise in. And on even rarer occasions, I have reason to think that there are very few if any other people besides me who would be able to write a review that does justice to the paper. In such cases, I willingly make an exception to my general policy. But it doesn’t happen often; in fact, it’s happened exactly twice in the past 4 years. In both cases, the paper in question was built to a very significant extent on work that I had done myself, and it seemed to me quite unlikely that the editor would be able to find another reviewer with the appropriate expertise given the particulars reported in the abstract. So I agreed to review the paper, even for an Elsevier journal, because to not do so would indeed have been a disservice to the authors. I don’t have any regrets about this, and I will do it again in future if the need arises. Exceptions are fine, and we shouldn’t let the perfect be the enemy of the good. But it simply isn’t true, in my view, that my general refusal to review for Elsevier is ever-so-slightly hurting science. On the contrary, I would argue that it’s actually ever-so-slightly helping it, by using my limited energies to support publishers and journals that work in favor of, rather than against, scientists’ interests.

“If everyone did as you do, Elsevier journals might fall apart, and that would impact many people’s careers. What about all the editors, publishing staff, proof readers, etc., who would all lose at least part of their livelihood?”

This is the universal heartstring-pulling argument, in that it can be applied to virtually any business or organization ever created that employs at least one person. For example, it’s true that if everyone stopped shopping at Wal-Mart, over a million Americans would lose their jobs. But given the externalities that Wal-Mart imposes on the American taxpayer, that hardly seems like a sufficient reason to keep shopping at Wal-Mart (note that I’m not saying you shouldn’t shop at Wal-Mart, just that you’re not under any moral obligation to view yourself as a one-person jobs program). Almost every decision that involves reallocation of finite resources hurts somebody; the salient question is whether, on balance, the benefits to the community as a whole outweigh the costs. In this case, I find it very hard to see how Elsevier’s policies benefit the scientific community as a whole when much cheaper, non-profit alternatives–to say nothing of completely different alternative models of scientific evaluation–are readily available.

It’s also worth remembering that the vast majority of the labor that goes into producing Elsevier’s journals is donated to Elsevier free of charge. Given Elsevier’s enormous profit margin (over 30% in each of the last 4 years), it strains credulity to think that other publishers couldn’t provide essentially the same services while improving the quality of life of the people who provide most of the work. For an example of such a model, take a look at Collabra, where editors receive a budget of $250 per paper (which comes out of the author publication charge) that they can divide up however they like among themselves, the reviewers, and publication subsidies for future authors who lack funds (full disclosure: I’m an editor at Collabra). So I think an argument based on treating people well clearly weighs against supporting Elsevier, not in favor of it. If nothing else, it should perhaps lead one to question why Elsevier insists it can’t pay the academics who review its articles a nominal fee, given that paying for even a million reviews per year (surely a gross overestimate) at $200 a pop would still only eat up less than 20% of Elsevier’s profit in each of the past few years.
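To make that back-of-envelope arithmetic explicit, here’s a minimal sketch; the annual profit figure is my own assumption (on the order of $1.1 billion, roughly consistent with the margin mentioned above), not a number taken from Elsevier or from this post:

    # Back-of-envelope sketch of the reviewer-payment claim above. The annual
    # profit figure is an assumption (~$1.1B USD), not a reported figure.
    reviews_per_year = 1_000_000      # deliberately generous overestimate
    fee_per_review = 200              # USD per review
    assumed_annual_profit = 1.1e9     # USD (assumption)

    total_fees = reviews_per_year * fee_per_review
    share = total_fees / assumed_annual_profit
    print(f"Total reviewer fees: ${total_fees / 1e6:.0f}M "
          f"({share:.0%} of assumed annual profit)")
    # -> Total reviewer fees: $200M (18% of assumed annual profit)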

“Whatever you may think of Elsevier’s policies at the corporate level, the editorial boards at the vast majority of Elsevier journals function autonomously, with no top-down direction from the company. Any fall-out from a widespread boycott would hurt all of the excellent editors at Elsevier journals who function with complete independence–and by extension, the field as a whole.”

I’ve now heard this argument from at least four or five separate editors at Elsevier journals, and I don’t doubt that its premise is completely true. Meaning, I’m confident that the scientific decisions made by editors at Elsevier journals on a day-to-day basis are indeed driven entirely by scientific considerations, and aren’t influenced in any way by publishing executives. That said, I’m completely unmoved by this argument, for two reasons. First, the allocation of resources–including peer reviews, submitted manuscripts, and editorial effort–is, to a first approximation, a zero-sum game. While I’m happy to grant that editorial decisions at Elsevier journals are honest and unbiased, the same is surely true of the journals owned by virtually every other publisher. So refusing to send a paper to NeuroImage doesn’t actually hurt the field as a whole in any way, unless one thinks that there is a principled reason why the editorial process at Cerebral Cortex, Journal of Neuroscience, or Journal of Cognitive Neuroscience should be any worse. Obviously, there can be no such reason. If Elsevier went out of business, many of its current editors would simply move to other journals, where they would no doubt resume making equally independent decisions about the manuscripts they receive. As I noted above, in a number of cases, entire editorial boards at Elsevier journals have successfully moved wholesale to new platforms. So there is clearly no service Elsevier provides that can’t in principle be provided more cheaply by other publishers or platforms that aren’t saddled with Elsevier’s moral baggage or absurd profit margins.

Second, while I don’t doubt the basic integrity of the many researchers who edit for Elsevier journals, I also don’t think they’re completely devoid of responsibility for the current state of affairs. When a really shitty company offers you a position of power, it may be true that accepting that position–in spite of the moral failings of your boss’s boss’s boss–may give you the ability to do some real good for the community you care about. But it’s also true that you’re still working for a really shitty company, and that your valiant efforts could at any moment be offset by some underhanded initiative in some other branch of the corporation. Moreover, if you’re really good at your job, your success–whatever its short-term benefits to your community–will generally serve to increase your employer’s shit-creating capacity. So while I don’t think accepting an editorial position at an Elsevier journal makes anyone a bad person (some of my best friends are editors for Elsevier!), I also see no reason for anyone to voluntarily do business with a really shitty company rather than a less shitty one. As far as I can tell, there is no service I care about that NeuroImage offers me but Cerebral Cortex or The Journal of Neuroscience don’t. As a consequence, it seems reasonable for me to submit my papers to journals owned by companies that seem somewhat less intent on screwing me and my institution out of as much money as possible. If that means that some very good editors at NeuroImage ultimately have to move to JNeuro, JCogNeuro, or (dare I say it!) PLOS ONE, I think I’m okay with that.

“It’s fine for you to decide not to deal with Elsevier, but you don’t have a right to make that decision for your colleagues or trainees when they’re co-authors on your papers.”

This is probably the only criticism I hear regularly that I completely agree with. Which is why I’ve always been explicit that I can and will make exceptions when required. Here’s what I said when I originally signed The Cost of Knowledge years ago:

[screenshot of the comment I left when signing The Cost of Knowledge pledge]

Basically, my position is that I’ll still submit a manuscript to an Elsevier journal if either (a) I think a trainee’s career would be significantly disadvantaged by not doing so, or (b) I’m not in charge of a project, and have no right to expect to exercise control over where a paper is submitted. The former has thankfully never happened so far (though I’m always careful to make it clear to trainees that if they really believe that it’s important to submit to a particular Elsevier journal, I’m okay with it). As for the latter, in the past 4 years, I’ve been a co-author on two Elsevier papers (1, 2). In both cases, I argued against submitting the paper to those journals, but was ultimately overruled. I don’t have any problem with either of those decisions, and remain on good terms with both lead authors. If I collaborate with you on a project, you can expect to receive an email from me suggesting in fairly strong terms that we should consider submitting to a non-Elsevier-owned journal, but I certainly won’t presume to think that what makes sense to me must also make sense to you.

“Isn’t it a bit silly to think that your one-person boycott of Elsevier is going to have any meaningful impact?”

No, because it isn’t a one-person boycott. So far, over 16,000 researchers have signed The Cost of Knowledge pledge. And there are very good reasons to think that the 16,000-strong (and growing!) boycott has already had important impacts. For one thing, Elsevier withdrew its support of the RWA in 2012 shortly after The Cost of Knowledge was announced (and several thousand researchers quickly signed on). The bill itself was withdrawn shortly after that. That seems like a pretty big deal to me, and frankly I find it hard to imagine that Elsevier would have voluntarily stopped lobbying Congress this way if not for thousands of researchers putting their money where their mouth is.

Beyond that clear example, it’s hard to imagine that 16,000 researchers walking away from a single publisher wouldn’t have a significant impact on the publishing landscape. Of course, there’s no clear way to measure that impact. But consider just a few points that seem difficult to argue against:

  • All of the articles that would have been submitted to Elsevier journals presumably ended up in other publishers’ journals (many undoubtedly run by OA publishers). There has been continual growth in the number of publishers and journals; some proportion of that seems almost guaranteed to reflect the diversion of papers away from Elsevier.

  • Similarly, all of the extra time spent reviewing non-Elsevier articles instead of Elsevier articles presumably meant that other journals received better scrutiny and faster turnaround times than they would have otherwise.

  • A number of high-profile initiatives–for example, the journal Glossa–arose directly out of researchers’ refusal to keep working with Elsevier (and many others are likely to have arisen indirectly, in part). These are not insignificant. Aside from their immediate impact on the journal landscape, the involvement of leading figures like Timothy Gowers in the movement to develop better publishing and evaluation options is likely to have a beneficial long-term impact.

All told, it seems to me that, far from being ineffectual, the Elsevier boycott–consisting of nothing more than individual researchers cutting ties with the publisher–has actually achieved a considerable amount in the past 4 years. Of course, Elsevier continues to bring in huge profits, so it’s not like it’s in any danger of imminent collapse (nor should that be anyone’s goal). But I think it’s clear that, on balance, the scientific publishing ecosystem is healthier for having the boycott in place, and I see much more reason to push for even greater adoption of the policy than to reconsider it.

More importantly, I think the criticism that individual action has limited efficacy overlooks what is probably the single biggest advantage the boycott has in this case: it costs a researcher essentially nothing. If I were to boycott, say, Trader Joe’s, on the grounds that it mistreats its employees (for the record, I don’t think it does), my quality of life would go down measurably, as I would have to (a) pay more for my groceries, and (b) travel longer distances to get them (there’s a store just down the street from my apartment, so I shop there a lot). By contrast, cutting ties with Elsevier has cost me virtually nothing so far. So even if the marginal benefit to the scientific community of each additional individual boycotting Elsevier is very low, the cost to that individual will typically be lower still. Which, in principle, makes it very easy to organize and maintain a collective action of this sort on a very large scale (and is probably a lot of what explains why over 16,000 researchers have already signed on).

What you can do

Let’s say you’ve read this far and find yourself thinking, okay, that all kind of makes sense. Maybe you agree with me that Elsevier is an amazingly shitty company whose business practices actively bite the hand that feeds it. But maybe you’re also thinking, well, the thing is, I almost exclusively publish primary articles in the field of neuroimaging [or insert your favorite Elsevier-dominated discipline here], and there’s just no way I can survive without publishing in Elsevier journals. So what can I do?

The first thing to point out is that there’s a good chance your fears are at least somewhat (and possibly greatly) exaggerated. As I noted at the outset of this post, I was initially a bit apprehensive about the impact that taking a principled stand would have on my own career, but I can’t say that I perceive any real cost to my decision, nearly five years on. One way you can easily see this is to observe that most people are surprised when I first tell them I haven’t published in Elsevier journals in five years. It’s not like the absence would ever jump out at anyone who looked at my publication list, so it’s unclear how it could hurt me. Now, I’m not saying that everyone is in a position to sign on to a complete boycott without experiencing some bumps in the road. But I do think many more people could do so than might be willing to admit it at first. There are very few fields that are completely dominated by Elsevier journals. Neuroimaging is probably one of the fields where Elsevier’s grip is strongest, but I publish several neuroimaging-focused papers a year, and have never had to work very hard to decide where to submit my papers next.

That said, the good news is that you can still do a lot to actively work towards an Elsevier-free world even if you’re unable or unwilling to completely part ways with the publisher. Here are a number of things you can do that take virtually no work, are very unlikely to harm your career in any meaningful way, and are likely to have nearly the same collective benefit as a total boycott:

  • Reduce or completely eliminate your Elsevier reviewing and/or editorial load. Even if you still plan to submit your papers to Elsevier journals, nothing compels you to review or edit for them. You should, of course, consider the pros and cons of turning down any review request; and, as I noted above, it’s fine to make occasional exceptions in cases where you think declining to review a particular paper would be a significant disservice to your peers. But such occasions are–at least in my own experience–quite rare. As I noted above, one of the reasons I’ve had no real compunction about rejecting Elsevier review requests is that I already receive many more requests than I can handle, so declining Elsevier reviews just means I review more for other (better) publishers. If you’re at an early stage of your career, and don’t get asked to review very often, the considerations may be different–though of course, you could still consider turning down the review and doing something nice for the scientific community with the time you’ve saved (e.g., reviewing openly on sites like PubPeer or PubMed Commons, or spending some time making all the data, code, and materials from your previous work openly available).

  • Make your acceptance of a review assignment conditional on some other prosocial perk. As a twist on simply refusing Elsevier review invitations, you can always ask the publisher for some reciprocal favor. You could try asking for monetary compensation, of course–and in the extremely unlikely event that Elsevier obliges, you could (if needed) soothe your guilty conscience by donating your earnings to a charity of your choice. Alternatively, you could try to extract some concession from the journal that would help counteract your general aversion to reviewing for Elsevier. Chris Gorgolewski provided one example in this tweet:

Mandating open science practices (e.g., public deposition of data and code) as a requirement for review is something that many people strongly favor completely independently of commercial publishers’ shenanigans (see my own take here). Making one’s review conditional on an Elsevier journal following best practices is a perfectly fair and even-handed approach, since there are other journals that either already mandate such standards (e.g., PLOS ONE), or are likely to be able to oblige you. So if you get an affirmative response from an Elsevier journal, then great–it’s still Elsevier, but at least you’ve done something useful to improve their practices. If you get a negative response, well, again, you can simply reallocate your energy somewhere else.

  • Submit fewer papers to Elsevier journals. If you publish, say, 5 – 10 fMRI articles a year, it’s completely understandable if you might not feel quite ready to completely give up on NeuroImage and the other three million neuroimaging journals in Elsevier’s stable. Fortunately, you don’t have to. This is a nice example of the Pareto principle in action: 20% of the effort goes maybe 80% of the way in this case. All you have to do to exert almost exactly the same impact as a total boycott of Elsevier is drop NeuroImage (or whatever other journal you routinely submit to) to the bottom of the queue of whatever journals you perceive as being in the same class. So, for example, instead of reflexively thinking, “oh, I should send this to NeuroImage–it’s not good enough for Nature Neuroscience, but I don’t want to send it to just any dump journal”, you can decide to submit it to Cerebral Cortex or The Journal of Neuroscience first, and only go to NeuroImage if the first two journals reject it. Given that most Elsevier journals have a fairly large equivalence class of non-Elsevier journals, a policy like this one would almost certainly cut submissions to Elsevier journals significantly if widely implemented by authors–which would presumably reduce the perceived prestige of those journals still further, potentially precipitating a death spiral.

  • Go cold turkey. Lastly, you could always just bite the bullet and cut all ties with Elsevier. Honestly, it really isn’t that bad. As I’ve already said, the fall-out in my case has been considerably smaller than I thought it would be when I signed The Cost of Knowledge pledge as a post-doc (i.e., I expected it to have some noticeable impact, but in hindsight I think it’s had essentially none). Again, I recognize that not everyone is in a position to do this. But I do think that the reflexive “that’s a crazy thing to do” reaction that some people seem to have when The Cost of Knowledge boycott is brought up isn’t really grounded in a careful consideration of the actual risks to one’s career. I don’t know how many of the 16,000 signatories to the boycott have had to drop out of science as a direct result of their decision to walk away from Elsevier, but I’ve never heard anyone suggest this happened to them, and I suspect the number is very, very small.

The best thing about all of the above action items–with the possible exception of the last–is that they require virtually no effort, and incur virtually no risk. In fact, you don’t even have to tell anyone you’re doing any of them. Let’s say you’re a graduate student, and your advisor asks you where you want to submit your next fMRI paper. You don’t have to say “well, on principle, anywhere but an Elsevier journal” and risk getting into a long argument about the issue; you can just say “I think I’d like to try Cerebral Cortex.” Nobody has to know that you’re engaging in moral purchasing, and your actions are still almost exactly as effective. You don’t have to march down the street holding signs and chanting loudly; you don’t have to show up in front of anyone’s office to picket. You can do your part to improve the scientific publishing ecosystem just by making a few tiny decisions here and there–and if enough other people do the same thing, Elsevier and its peers will eventually be left with a stark choice: shape up, or crumble.

whether or not you should pursue a career in science still depends mostly on that thing that is you

I took the plunge a couple of days ago and answered my first question on Quora. Since Brad Voytek won’t shut up about how great Quora is, I figured I should give it a whirl. So far, Brad is not wrong.

The question in question is: “How much do you agree with Johnathan Katz’s advice on (not) choosing science as a career? Or how realistic is it today (the article was written in 1999)?” The Katz piece referred to is here. The gist of it should be familiar to many academics; the argument boils down to the observation that relatively few people who start graduate programs in science actually end up with permanent research positions, and even then, the need to obtain funding often crowds out the time one has to do actual science. Katz’s advice is basically: don’t pursue a career in science. It’s not an optimistic piece.

My answer is, I think, somewhat more optimistic. Here’s the full text:

The real question is what you think it means to be a scientist. Science differs from many other professions in that the typical process of training as a scientist–i.e., getting a Ph.D. in a scientific field from a major research university–doesn’t guarantee you a position among the ranks of the people who are training you. In fact, it doesn’t come close to guaranteeing it; the proportion of PhD graduates in science who go on to obtain tenure-track positions at research-intensive universities is very small–around 10% in most recent estimates. So there is a very real sense in which modern academic science is a bit of a pyramid scheme: there are a relatively small number of people at the top, and a lot of people on the rungs below laboring to get up to the top–most of whom will, by definition, fail to get there.

If you equate a career in science solely with a tenure-track position at a major research university, and are considering the prospect of a Ph.D. in science solely as an investment intended to secure that kind of position, then Katz’s conclusion is difficult to escape. He is, in most respects, correct: in most biomedical, social, and natural science fields, science is now an extremely competitive enterprise. Not everyone makes it through the PhD; of those who do, not everyone makes it into–and then through–one or more postdocs; and of those who do that, relatively few secure tenure-track positions. Then, of those few “lucky” ones, some will fail to get tenure, and many others will find themselves spending much or most of their time writing grants and managing people instead of actually doing science. So from that perspective, Katz is probably right: if what you mean when you say you want to become a scientist is that you want to run your own lab at a major research university, then your odds of achieving that at the outset are probably not very good (though, to be clear, they’re still undoubtedly better than your odds of becoming a successful artist, musician, or professional athlete). Unless you have really, really good reasons to think that you’re particularly brilliant, hard-working, and creative (note: undergraduate grades, casual feedback from family and friends, and your own internal gut sense do not qualify as really, really good reasons), you probably should not pursue a career in science.

But that’s only true given a rather narrow conception where your pursuit of a scientific career is motivated entirely by the end goal rather than by the process, and where anything other than ending up with a permanent tenure-track position counts as failure. By contrast, if what you’re really after is an environment in which you can pursue interesting questions in a rigorous way, surrounded by brilliant minds who share your interests, and with more freedom than you might find at a typical 9 to 5 job, the dream of being a scientist is certainly still alive, and is worth pursuing. The trivial demonstration of this is that if you’re one of the many people who actually enjoy the graduate school environment (yes, they do exist!), it may not even matter to you that much whether or not you have a good shot at getting a tenure-track position when you graduate.

To see this, imagine that you’ve just graduated with an undergraduate degree in science, and someone offers you a choice between two positions for the next six years. One position is (relatively) financially secure, but involves rather boring work of questionable utility to society, an inflexible schedule, and colleagues who are mostly only there for a paycheck. The other position has terrible pay, but offers fascinating and potentially important work, a flexible lifestyle, and colleagues who are there because they share your interests and want to do scientific research.

Admittedly, real-world choices are rarely this stark. Many non-academic jobs offer many of the same perceived benefits of academia (e.g., many tech jobs offer excellent working conditions, flexible schedules, and important work). Conversely, many academic environments don’t quite live up to the ideal of a place where you can go to pursue your intellectual passion unfettered by the annoyances of “real” jobs–there’s often just as much in the way of political intrigue, personality dysfunction, and menial dues-paying duties. But to a first approximation, this is basically the choice you have when considering whether to go to graduate school in science or pursue some other career: you’re trading financial security and a fixed 40-hour work week against intellectual engagement and a flexible lifestyle. And the point to note is that, even if we completely ignore what happens after the six years of grad school are up, there is clearly a non-negligible segment of the population who would quite happily opt for the second choice–even recognizing full well that at the end of six years they may have to leave and move on to something else, with little to show for their effort. (Of course, in reality we don’t need to ignore what happens after six years, because many PhDs who don’t get tenure-track positions find rewarding careers in other fields–many of them scientific in nature. And, even though it may not be a great economic investment, having a Ph.D. in science is a great thing to be able to put on one’s resume when applying for a very broad range of non-academic positions.)

The bottom line is that whether or not you should pursue a career in science has as much or more to do with your goals and personality as it does with the current environment within or outside of (academic) science. In an ideal world (which is certainly what the 1970s as described by Katz sound like, though I wasn’t around then), it wouldn’t matter: if you had any inkling that you wanted to do science for a living, you would simply go to grad school in science, and everything would probably work itself out. But given real-world constraints, it’s absolutely essential that you think very carefully about what kind of environment makes you happy and what your expectations and goals for the future are. You have to ask yourself: Am I the kind of person who values intellectual freedom more than financial security? Do I really love the process of actually doing science–not some idealized movie version of it, but the actual messy process–enough to warrant investing a huge amount of my time and energy over the next few years? Can I deal with perpetual uncertainty about my future? And ultimately, would I be okay doing something that I really enjoy for six years if at the end of that time I have to walk away and do something very different?

If the answer to all of these questions is yes–and for many people it is!–then pursuing a career in science is still a very good thing to do (and hey, you can always quit early if you don’t like it–then you’ve lost very little time!). If the answer to any of them is no, then Katz may be right. A prospective career in science may or may not be for you, but at the very least, you should carefully consider alternative prospects. There’s absolutely no shame in going either route; the important thing is just to make an honest decision that takes the facts as they are, not as you wish they were.

A couple of other thoughts I’ll add belatedly:

  • Calling academia a pyramid scheme is admittedly a bit hyperbolic. It’s true that the personnel structure in academia broadly has the shape of a pyramid, but that’s true of most organizations in most other domains too. Pyramid schemes are typically built on promises and lies that (almost by definition) can’t be realized, and I don’t think many people who enter a Ph.D. program in science can claim with a straight face that they were guaranteed a permanent research position at the end of the road (or that it’s impossible to get such a position). As I suggested in this post, it’s much more likely that everyone involved is simply guilty of minor (self-)deception: faculty don’t go out of their way to tell prospective students what the odds are of actually getting a tenure-track position, and prospective grad students don’t work very hard to find out the painful truth, or to tell faculty what their real intentions are after they graduate. And it may actually be better for everyone that way.
  • Just in case it’s not clear from the above, I’m not in any way condoning the historically low levels of science funding, or the fact that very few science PhDs go on to careers in academic research. I would love for NIH and NSF budgets (or whatever your local agency is) to grow substantially–and for everyone to get exactly the kind of job they want, academic or not. But that’s not the world we live in, so we may as well be pragmatic about it and try to identify the conditions under which it does or doesn’t make sense to pursue a career in science right now.
  • I briefly mention this above, but it’s probably worth stressing that there are many jobs outside of academia that still allow one to do scientific research, albeit typically with less freedom (but often for better hours and pay). In particular, the market for data scientists is booming right now, and many of the hires are coming directly from academia. One lesson to take away from this is: if you’re in a science Ph.D. program right now, you should really spend as much time as you can building up your quantitative and technical skills, because they could very well be the difference between a job that involves scientific research and one that doesn’t in the event you leave academia. And those skills will still serve you well in your research career even if you end up staying in academia.

 

the ‘decline effect’ doesn’t work that way

Over the last four or five years, there’s been a growing awareness in the scientific community that science is an imperfect process. Not that everyone used to think science was a crystal ball with a direct line to the universe or anything, but there does seem to be a growing recognition that scientists are human beings with human flaws, and are susceptible to common biases that can make it more difficult to fully trust any single finding reported in the literature. For instance, scientists like interesting results more than boring results; we’d rather keep our jobs than lose them; and we have a tendency to see what we want to see, even when it’s only sort-of-kind-of there, and sometimes not there at all. All of these things contrive to produce systematic biases in the kinds of findings that get reported.

The single biggest contributor to this zeitgeist shift is undoubtedly John Ioannidis (recently profiled in an excellent Atlantic article), whose work I can’t say enough good things about (though I’ve tried). But lots of other people have had a hand in popularizing the same or similar ideas–many of which actually go back several decades. I’ve written a bit about these issues myself in a number of papers (1, 2, 3) and blog posts (1, 2, 3, 4, 5), so I’m partial to such concerns. Still, important as the role of the various selection and publication biases is in charting the course of science, virtually all of the discussions of these issues have had a relatively limited audience. Even Ioannidis’ work, influential as it’s been, has probably been read by no more than a few thousand scientists.

Last week, the debate hit the mainstream when the New Yorker (circulation: ~ 1 million) published an article by Jonah Lehrer suggesting–or at least strongly raising the possibility–that something might be wrong with the scientific method. The full article is behind a paywall, but I can helpfully tell you that some people seem to have un-paywalled it against the New Yorker’s wishes, so if you search for it online, you will find it.

The crux of Lehrer’s argument is that many, and perhaps most, scientific findings fall prey to something called the “decline effect”: initial positive reports of relatively large effects are subsequently followed by gradually decreasing effect sizes, in some cases culminating in a complete absence of an effect in the largest, most recent studies. Lehrer gives a number of colorful anecdotes illustrating this process, and ends on a decidedly skeptical (and frankly, terribly misleading) note:

The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.

While Lehrer’s article received pretty positive reviews from many non-scientist bloggers (many of whom, dismayingly, seemed to think the take-home message was that since scientists always change their minds, we shouldn’t trust anything they say), science bloggers were generally not very happy with it. Within days, angry mobs of Scientopians and Nature Networkers started murdering unicorns; by the end of the week, the New Yorker offices were reduced to rubble, and the scientists and statisticians who’d given Lehrer quotes were all rumored to be in hiding.

Okay, none of that happened. I’m just trying to keep things interesting. Anyway, because I’ve been characteristically slow on the uptake, by the time I got around to writing this post you’re now reading, about eight hundred and sixty thousand bloggers had already weighed in on Lehrer’s article. That’s good, because it means I can just direct you to other people’s blogs instead of having to do any thinking myself. So here you go: good posts by Games With Words (whose post tipped me off to the article), Jerry Coyne, Steven Novella, Charlie Petit, and Andrew Gelman, among many others.

Since I’ve blogged about these issues before, and agree with most of what’s been said elsewhere, I’ll only make one point about the article. Which is that about half of the examples Lehrer talks about don’t actually seem to me to qualify as instances of the decline effect–at least as Lehrer defines it. The best example of this comes when Lehrer discusses Jonathan Schooler’s attempt to demonstrate the existence of the decline effect by running a series of ESP experiments:

In 2004, Schooler embarked on an ironic imitation of Rhine’s research: he tried to replicate this failure to replicate. In homage to Rhine’s interests, he decided to test for a parapsychological phenomenon known as precognition. The experiment itself was straightforward: he flashed a set of images to a subject and asked him or her to identify each one. Most of the time, the response was negative–the images were displayed too quickly to register. Then Schooler randomly selected half of the images to be shown again. What he wanted to know was whether the images that got a second showing were more likely to have been identified the first time around. Could subsequent exposure have somehow influenced the initial results? Could the effect become the cause?

The craziness of the hypothesis was the point: Schooler knows that precognition lacks a scientific explanation. But he wasn’t testing extrasensory powers; he was testing the decline effect. “At first, the data looked amazing, just as we’d expected,” Schooler says. “I couldn’t believe the amount of precognition we were finding. But then, as we kept on running subjects, the effect size”–a standard statistical measure–“kept on getting smaller and smaller.” The scientists eventually tested more than two thousand undergraduates. “In the end, our results looked just like Rhine’s,” Schooler said. “We found this strong paranormal effect, but it disappeared on us.”

This is a pretty bad way to describe what’s going on, because it makes it sound like it’s a general principle of data collection that effects systematically get smaller. It isn’t. The variance around the point estimate of effect size certainly gets smaller as samples get larger, but the likelihood of an effect increasing is just as high as the likelihood of it decreasing. The absolutely critical point Lehrer left out is that you would only get the decline effect to show up if you intervened in the data collection or reporting process based on the results you were getting. Instead, most of Lehrer’s article presents the decline effect as if it’s some sort of mystery, rather than the well-understood process that it is. It’s as though Lehrer believes that scientific data has the magical property of telling you less about the world the more of it you have. Which isn’t true, of course; the problem isn’t that science is malfunctioning, it’s that scientists are still (kind of!) human, and are susceptible to typical human biases. The unfortunate net effect is that Lehrer’s article, while tremendously entertaining, achieves exactly the opposite of what good science journalism should do: it sows confusion about the scientific process and makes it easier for people to dismiss the results of good scientific work, instead of helping people develop a critical appreciation for the amazing power science has to tell us about the world.
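To make that point concrete, here’s a minimal simulation sketch (mine, not anything from Lehrer’s article or Schooler’s study); the specific numbers (a true effect of d = 0.2, small early studies that only get reported when p < .05, and larger later studies reported regardless) are purely illustrative assumptions:

    # Minimal simulation (illustrative assumptions only, no real data) showing
    # that a "decline effect" falls out of selective reporting, not out of
    # collecting more data per se.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_d = 0.2  # assumed true standardized effect size

    def run_study(n_per_group):
        """Simulate one two-group study; return (estimated effect, p-value)."""
        treatment = rng.normal(true_d, 1.0, n_per_group)
        control = rng.normal(0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(treatment, control)
        return treatment.mean() - control.mean(), p

    # "Early" literature: small studies (n = 20 per group), reported only if p < .05.
    early = [run_study(20) for _ in range(5000)]
    early_reported = [d for d, p in early if p < 0.05]

    # "Later" literature: larger studies (n = 200 per group), reported regardless.
    later = [run_study(200)[0] for _ in range(5000)]

    print(f"true effect:                  {true_d:.2f}")
    print(f"early, significance-filtered: {np.mean(early_reported):.2f}")
    print(f"later, unfiltered:            {np.mean(later):.2f}")
    # The filtered early estimates come out badly inflated, while the later,
    # unfiltered estimates cluster around the true value: the "decline" is a
    # consequence of the filter, not a mysterious property of accumulating data.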

the male brain hurts, or how not to write about science

My wife asked me to blog about this article on CNN because, she said, “it’s really terrible, and it shouldn’t be on CNN”. I usually do what my wife tells me to do, so I’m blogging about it. It’s by Louann Brizendine, M.D., author of the absolutely awful, controversial book The Female Brain, and now, its manly counterpart, The Male Brain. From what I can gather, the CNN article, which is titled Love, Sex, and the Male Brain, is a precis of Brizendine’s new book (though I have no intention of reading the book to make sure). The article is pretty short, so I’ll go through the first half of it paragraph-by-paragraph. But I’ll warn you right now that it isn’t pretty, and will likely anger anyone with even a modicum of training in psychology or neuroscience.

Although women the world over have been doing it for centuries, we can’t really blame a guy for being a guy. And this is especially true now that we know that the male and female brains have some profound differences.

Our brains are mostly alike. We are the same species, after all. But the differences can sometimes make it seem like we are worlds apart.

So far, nothing terribly wrong here, just standard pop psychology platitudes. But it goes quickly downhill.

The “defend your turf” area — dorsal premammillary nucleus — is larger in the male brain and contains special circuits to detect territorial challenges by other males. And his amygdala, the alarm system for threats, fear and danger is also larger in men. These brain differences make men more alert than women to potential turf threats.

As Vaughan notes over at Mind Hacks, the dorsal premammillary nucleus (PMD) hasn’t been identified in humans, so it’s unclear exactly what chunk of tissue Brizendine’s referring to–let alone where the evidence that there are gender differences in humans might come from. The claim that the PMD is a “defend your turf” area might be plausible, if, oh, I don’t know, you happen to think that the way rats behave under narrowly circumscribed laboratory conditions when confronted by an aggressor is a good guide to normal interactions between human males. (Then again, given that PMD lesions impair rats’ ability to flee when exposed to a cat, Brizendine could just as easily have concluded that the dorsal premammillary nucleus is the “fleeing” part of the brain.)

The amygdala claim is marginally less ridiculous: it’s not entirely clear that the amygdala is “the alarm system for threats, fear and danger”, but at least that’s a claim you can make with a straight face, since it’s a fairly common view among neuroscientists. What’s not really defensible is the claim that larger amygdalae “make men more alert than women to potential turf threats”, because (a) there’s limited evidence that the male amygdala really is larger than the female amygdala, (b) if such a difference exists, it’s very small, and (c) it’s not clear in any case how you get from a small between-group difference to the notion that the amygdala is somehow the reason why men maintain little interpersonal fiefdoms and women don’t.

Meanwhile, the “I feel what you feel” part of the brain — mirror-neuron system — is larger and more active in the female brain. So women can naturally get in sync with others’ emotions by reading facial expressions, interpreting tone of voice and other nonverbal emotional cues.

This falls under the rubric of “not even wrong”. The mirror neuron system isn’t a single “part of the brain”; current evidence suggests that neurons that show mirroring properties are widely distributed throughout multiple frontoparietal regions. So I don’t really know what brain region Brizendine is referring to (the fact that she never cites any empirical studies in support of her claims is something of an inconvenience in that respect). And even if I did know, it’s a safe bet it wouldn’t be the “I feel what you feel” brain region, because, as far as I know, no such thing exists. The central claim regarding mirror neurons isn’t that they support empathy per se, but that they support a much more basic type of representation–namely, abstract conceptual (as opposed to sensory/motor) representation of actions. And even that much weaker notion is controversial; for example, Greg Hickok has a couple of recent posts (and a widely circulated paper) arguing against it. No one, as far as I know, has provided any kind of serious evidence linking the mirror neuron system to females’ (modestly) superior nonverbal decoding ability.

Perhaps the biggest difference between the male and female brain is that men have a sexual pursuit area that is 2.5 times larger than the one in the female brain. Not only that, but beginning in their teens, they produce 200 to 250 percent more testosterone than they did during pre-adolescence.

Maybe the silliest paragraph in the whole article. Not only do I not know what region Brizendine is talking about here, I have absolutely no clue what the “sexual pursuit area” might be. It could be just me, I suppose, but I just searched Google Scholar for “sexual pursuit area” and got… zero hits. Is it a visual region? A part of the hypothalamus? The notoriously grabby motor cortex hand area? No one knows, and Brizendine isn’t telling.  Off-hand, I don’t know of any region of the human brain that shows the degree of sexual dimorphism Brizendine claims here.

If testosterone were beer, a 9-year-old boy would be getting the equivalent of a cup a day. But a 15-year-old would be getting the equivalent of nearly two gallons a day. This fuels their sexual engines and makes it impossible for them to stop thinking about female body parts and sex.

If each fiber of chest hair were a tree, a 12-year-old boy would have a Bonsai sitting on the kitchen counter, and a 30-year-old man would own Roosevelt National Forest. What you’re supposed to learn from this analogy, I honestly couldn’t tell you. It’s hard for me to think clearly about trees and hair, you see, seeing as how I find it impossible to stop thinking about female body parts while I’m trying to write this.

All that testosterone drives the “Man Trance”– that glazed-eye look a man gets when he sees breasts. As a woman who was among the ranks of the early feminists, I wish I could say that men can stop themselves from entering this trance. But the truth is, they can’t. Their visual brain circuits are always on the lookout for fertile mates. Whether or not they intend to pursue a visual enticement, they have to check out the goods.

To a man, this is the most natural response in the world, so he’s dismayed by how betrayed his wife or girlfriend feels when she sees him eyeing another woman. Men look at attractive women the way we look at pretty butterflies. They catch the male brain’s attention for a second, but then they flit out of his mind. Five minutes later, while we’re still fuming, he’s deciding whether he wants ribs or chicken for dinner. He asks us, “What’s wrong?” We say, “Nothing.” He shrugs and turns on the TV. We smolder and fear that he’ll leave us for another woman.

This actually isn’t so bad if you ignore the condescending “men are animals with no self-control” implication and pretend Brizendine had just made the indisputably true but utterly banal observation that men, on average, like to ogle women more than women, on average, like to ogle men.

Not surprisingly, the different objectives that men and women have in mating games put us on opposing teams — at least at first. The female brain is driven to seek security and reliability in a potential mate before she has sex. But a male brain is fueled to mate and mate again. Until, that is, he mates for life.

So men are driven to sleep around, again and again… until they stop sleeping around. It’s tautological and profound at the same time!

Despite stereotypes to the contrary, the male brain can fall in love just as hard and fast as the female brain, and maybe more so. When he meets and sets his sights on capturing “the one,” mating with her becomes his prime directive. And when he succeeds, his brain makes an indelible imprint of her. Lust and love collide and he’s hooked.

Failure to operationalize complex construct of “love” in a measurable way… check. Total lack of evidence in support of claim that men and women are equally love-crazy… check. Oblique reference to Star Trek universe… check. What’s not to like?

A man in hot pursuit of a mate doesn’t even remotely resemble a devoted, doting daddy. But that’s what his future holds. When his mate becomes pregnant, she’ll emit pheromones that will waft into his nostrils, stimulating his brain to make more of a hormone called prolactin. Her pheromones will also cause his testosterone production to drop by 30 percent.

You know, on the off-chance that something like this is actually true, I think it’s actually kind of neat. But I just can’t bring myself to do a literature search, because I’m pretty sure I’ll discover that the jury is still out on whether humans even emit and detect pheromones (ok, I know this isn’t a completely baseless claim), or that there’s little to no evidence of a causal relationship between women releasing pheromones and testosterone levels dropping in men. I don’t like to be disappointed, you see; it turns out it’s much easier to just decide what you want to believe ahead of time and then contort available evidence to fit that view.

Anyway, we’re only half-way through the article; Brizendine goes on in similar fashion for several hundred more words. Highlights include the origin of male poker face, the conflation of correlation and causation in sociable elderly men, and the effects of oxytocin on your grandfather. You should go read the rest of it if you practice masochism; I’m too depressed to write about it any more.

Setting aside the blatant exercise in irresponsible scientific communication (Brizendine has an MD, and appears to be at least nominally affiliated with UCSF’s psychiatry department, so ignorance shouldn’t really be a valid excuse here), I guess what I’d really like to know is what goes through Brizendine’s mind when she writes this sort of dreck. Does she really believe the ludicrous claims she makes? Is she fully aware she’s grossly distorting the empirical evidence, if not outright confabulating, and is simply in it for the money? Or does she rationalize it as a case of the ends justifying the means, thinking the message she’s presenting is basically right, so it’s ok if a few of the details go missing in the process?

I understand that presenting scientific evidence in an accurate and entertaining manner is a difficult business, and many people who work hard at it still get it wrong pretty often (I make mistakes in my posts here all the time!). But many scientists still manage to find time in their busy schedules to write popular science books that present the science in an accessible way without having to make up ridiculous stories just to keep the reader entertained (Steven Pinker, Antonio Damasio, and Dan Gilbert are just a few of the first ones that spring to mind). And then there are amazing science writers like Carl Zimmer and David Dobbs who don’t necessarily have any professional training in the areas they write about, but still put in the time and energy to make sure they get the details right, and consistently write stories that blow me away (the highest compliment I can pay to a science story is that it makes me think “I wish I studied that”, and Zimmer’s articles routinely do that). That type of intellectual honesty is essential, because there’s really no point in going to the trouble of doing most scientific research if people get to disregard any findings they disagree with on ideological or aesthetic grounds, or can make up any evidence they like to fit their claims.

The sad thing is that Brizendine’s new book will probably sell more copies in its first year out than Carl Zimmer’s entire back catalogue. And it’s not going to sell all those copies because it’s a careful meditation on the subtle differences between genders that scientists have uncovered; it’s going to fly off the shelves because it basically regurgitates popular stereotypes about gender differences with a seemingly authoritative scientific backing. Instead of evaluating and challenging many of those notions with actual empirical data, people who read Brizendine’s work will now get to say “science proves it!”, making it that much more difficult for responsible scientists and journalists to tell the public what’s really true about gender differences.

You might say (or at least, Brizendine might say) that this is all well and good, but hopelessly naive and idealistic, and that telling an accurate story is always going to be less important than telling the public what it wants to hear about science, because the latter is the only way to ensure continued funding for and interest in scientific research. This isn’t that uncommon a sentiment; I’ve even heard a number of scientists who I otherwise have a great deal of respect for say something like this. But I think Brizendine’s work underscores the typical outcome of that type of reasoning: once you allow yourself to relax the standards for what counts as evidence, it becomes quite easy to rationalize almost any rhetorical abuse of science, and ultimately you abuse the public’s trust while muddying the waters for working scientists.

As with so many other things, I think Richard Feynman summed up this sentiment best:

I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I am not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being. We’ll leave those problems up to you and your rabbi. I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you are maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.

For example, I was a little surprised when I was talking to a friend who was going to go on the radio. He does work on cosmology and astronomy, and he wondered how he would explain what the applications of this work were. “Well,” I said, “there aren’t any.” He said, “Yes, but then we won’t get support for more research of this kind.” I think that’s kind of dishonest. If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing–and if they don’t want to support you under those circumstances, then that’s their decision.

No one doubts that men and women differ from one another, and the study of gender differences is an active and important area of psychology and neuroscience. But I can’t for the life of me see any merit in telling the public that men can’t stop thinking about breasts because they’re full of the beer-equivalent of two gallons of testosterone.

[Update 3/25: plenty of other scathing critiques pop up in the blogosphere today: Language Log, Salon, Neuronarrative, and no doubt many others…]

Feynman’s first principle: on the virtue of changing one’s mind

As an undergraduate, I majored in philosophy. Actually, that’s not technically true: I came within one credit of double-majoring in philosophy and psychology, but I just couldn’t bring myself to take one more ancient philosophy course (a requirement for the major), so I ended up majoring in psychology and minoring in philosophy. But I still had to read a lot of philosophy, and one of my favorite works was Hilary Putnam’s Representation and Reality. The reason I liked it so much had nothing to do with the content (which, frankly, I remember nothing of), and everything to do with the introduction. Hilary Putnam was notorious for changing his mind about his ideas, a practice he defended this way in the introduction to Representation and Reality:

In this book I shall be arguing that the computer analogy, call it the “computational view of the mind,” or “functionalism,” or what you will, does not after all answer the question we philosophers (along with many cognitive scientists) want to answer, the question “What is the nature of mental states?” I am thus, as I have done on more than one occasion, criticizing a view I myself earlier advanced. Strangely enough, there are philosophers who criticize me for doing this. The fact that I change my mind in philosophy has been viewed as a character defect. When I am lighthearted, I retort that it might be that I change my mind because I make mistakes, and that other philosophers don’t change their minds because they simply never make mistakes.

It’s a poignant way of pointing out the absurdity of a view that seemed to me at the time much too common in philosophy (and which, I’ve since discovered, is also fairly common in science): that changing your mind is a bad thing, and conversely, that maintaining a consistent position on important issues is a virtue. I’ve never really understood this, since any time you have at least two people with incompatible views in the same room, at least one of them has to be wrong, so the odds that any given view expressed at random is wrong must be at least 50%. In science, of course, there are rarely just two explanations for a given phenomenon. Ask 10 cognitive neuroscientists what they think the anterior cingulate cortex does, and you’ll probably get a bunch of different answers (though maybe not 10 of them). So the odds of any one person being right about anything at any given point in time are actually not so good. If you’re honest with yourself about that, you’re forced to conclude not only that most published research findings are false, but also that the vast majority of theories that purport to account for large bodies of evidence are false–or at least, wrong in some important ways.

The fact that we’re usually wrong when we make scientific (or philosophical) pronouncements isn’t a reason to abandon hope and give up doing science, of course; there are shades of accuracy, and even if it’s not realistic to expect to be right much of the time, we can at least strive to be progressively less wrong. The best expression of this sentiment that I know of is an Isaac Asimov essay entitled The Relativity of Wrong. Asimov was replying to a letter from a reader who took offense at the fact that Asimov, in one of his other essays, “had expressed a certain gladness at living in a century in which we finally got the basis of the universe straight”:

The young specialist in English Lit, having quoted me, went on to lecture me severely on the fact that in every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern “knowledge” is that it is wrong. The young man then quoted with approval what Socrates had said on learning that the Delphic oracle had proclaimed him the wisest man in Greece. “If I am the wisest man,” said Socrates, “it is because I alone know that I know nothing.” The implication was that I was very foolish because I was under the impression I knew a great deal.

My answer to him was, “John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

The point being that scientific progress isn’t predicated on getting it right, but on getting it more right. Which seems reassuringly easy, except that that still requires us to change our minds about the things we believe in on occasion, and that’s not always a trivial endeavor.

In the years since reading Putnam’s introduction, I’ve come across a number of other related sentiments. One comes from Richard Dawkins, in a fantastic 1996 Edge talk:

A formative influence on my undergraduate self was the response of a respected elder statesmen of the Oxford Zoology Department when an American visitor had just publicly disproved his favourite theory. The old man strode to the front of the lecture hall, shook the American warmly by the hand and declared in ringing, emotional tones: “My dear fellow, I wish to thank you. I have been wrong these fifteen years.” And we clapped our hands red. Can you imagine a Government Minister being cheered in the House of Commons for a similar admission? “Resign, Resign” is a much more likely response!

Maybe I’m too cynical, but I have a hard time imagining such a thing happening at any talk I’ve ever attended. But I’d like to believe that if it did, I’d also be clapping myself red.

My favorite piece on this theme, though, is without a doubt Richard Feynman’s 1974 Caltech commencement address, “Cargo Cult Science”. If you’ve never read it, you really should; it’s a phenomenally insightful, and simultaneously entertaining, assessment of the scientific process:

We’ve learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature’s phenomena will agree or they’ll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work. And it’s this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science.

A little further along, Feynman is even more succinct, offering what I’d say might be the most valuable piece of scientific advice I’ve come across:

The first principle is that you must not fool yourself–and you are the easiest person to fool.

I really think this is the first principle, in that it’s the one I apply most often when analyzing data and writing up papers for publication. Am I fooling myself? Do I really believe the finding, irrespective of how many zeros the p value happens to contain? Or are there other reasons I want to believe the result (e.g., that it tells a sexy story that might make it into a high-impact journal) that might trump its scientific merit if I’m not careful? Decision rules abound in science–the most famous one in psychology being the magical p < .05 threshold. But it’s very easy to fool yourself into believing things you shouldn’t believe when you allow yourself to off-load your scientific conscience onto some numbers in a spreadsheet. And the more you fool yourself about something, the harder it becomes to change your mind later on when you come across some evidence that contradicts the story you’ve sold yourself (and other people).
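Here’s a small sketch of my own (not something from the post, and with entirely made-up parameters) of one common way the p < .05 rule lets you fool yourself: test a handful of subjects, peek at the p value, collect a few more if it isn’t significant yet, and stop the moment it dips below .05.

```python
# Toy demonstration of optional stopping: even when the null hypothesis is
# true, repeatedly peeking at p and stopping at the first p < .05 inflates
# the false positive rate well beyond the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments = 2000
max_n, peek_every = 100, 10

false_positives = 0
for _ in range(n_experiments):
    data = rng.normal(0, 1, max_n)  # the null is true: no effect at all
    for n in range(peek_every, max_n + 1, peek_every):
        if stats.ttest_1samp(data[:n], 0).pvalue < .05:
            false_positives += 1    # stop early and "find" an effect
            break

print(f"nominal alpha: .05, observed rate: {false_positives / n_experiments:.2f}")
# with these (assumed) settings, the observed rate is typically well above .05
```

Every individual p value here is computed correctly; the fooling happens entirely in the decision rule wrapped around them.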

Given how I feel about mind-changing, I suppose I should really be able to point to cases where I’ve changed my own mind about important things. But the truth is that I can’t think of as many as I’d like. Which is to say, I worry that the fact that I still believe so many of the things I believed 5 or 10 years ago means I must be wrong about most of them. I’d actually feel more comfortable if I changed my mind more often, because then at least I’d feel more confident that I was capable of evaluating the evidence objectively and changing my beliefs when change was warranted. Still, there are at least a few ideas I’ve changed my mind about, some of them fairly big ones. Here are a few examples of things I used to believe and don’t any more, for scientific reasons:

  • That libertarianism is a reasonable ideology. I used to really believe that people would be happiest if we all just butted out of each other’s business and gave each other maximal freedom to govern our lives however we see fit. I don’t believe that any more, because the empirical evidence has convinced me that libertarianism just doesn’t (and can’t) work in practice, and is a worldview that doesn’t really have any basis in reality. When we’re given more information and more freedom to make our choices, we generally don’t make better decisions that make us happier; in fact, we often make poorer decisions that make us less happy. In general, human beings turn out to be really outstandingly bad at predicting the things that really make us happy–or even at evaluating how happy the things we currently have make us. And the notion of personal responsibility that libertarians stress turns out to have very limited applicability in practice, because so many of the choices we make aren’t under our direct control in any meaningful sense (e.g., because the bulk of the variance in our cognitive abilities and personalities is inherited from our parents, or because subtle contextual cues influence our choices without our knowledge, and often to our detriment). So in the space of just a few years, I’ve gone from being a libertarian to basically being a raving socialist. And I’m not apologetic about that, because I think it’s what the data support.
  • That we should stress moral education when raising children. The reason I don’t believe this any more is much the same as the above: it turns out that children aren’t blank slates to be written on as we see fit. The data clearly show that post-conception, parents have very limited capacity to influence their children’s behavior or personality. So there’s something to be said for trying to provide an environment that makes children basically happy rather than one that tries to mould them into the morally upstanding little people they’re almost certain to turn into no matter what we do or don’t do.
  • That DLPFC is crucially involved in some specific cognitive process like inhibition or maintenance or manipulation or relational processing or… you name it. At various points in time, I’ve believed a number of these things. But for reasons I won’t go into, I now think the best characterization is something very vague and non-specific like “abstract processing” or “re-representation of information”. That sounds unsatisfying, but no one said the truth had to be satisfying on an intuitive level. And anyway, I’m pretty sure I’ll change my view about this many more times in future.
  • That there’s a general factor of intelligence. This is something I’ve been meaning to write about here for a while now (UPDATE: and I have now, here), and will hopefully get around to soon. But if you want to know why I don’t think g is real, read this explanation by Cosma Shalizi, which I think presents a pretty open-and-shut case.

That’s not a comprehensive list, of course; it’s just the first few things I could think of that I’ve changed my mind about. But it still bothers me a little bit that these are all things that I’ve never taken a public position on in any published article (or even on this blog). After all, it’s easy to change your mind when no one’s watching. Ego investment usually stems from telling other people what you believe, not from thinking out loud to yourself when you’re pacing around the living room. So I still worry that the fact I’ve never felt compelled to say “I used to think… but I now think” about any important idea I’ve asserted publicly means I must be fooling myself. And if there’s one thing that I unfailingly believe, it’s that I’m the easiest person to fool…

[For another take on the virtues of mind-changing, see Mark Crislip’s “Changing Your Mind“, which provided the impetus for this post.]

the genetics of dog hair

Aside from containing about eleventy hundred papers on Ardi–our new 4.4 million year-old ancestor–this week’s issue of Science has an interesting article on the genetics of dog hair. What is there to know about dog hair, you ask? Well, it turns out that nearly all of the phenotypic variation in dog coats (curly, shaggy, short-haired, etc.) is explained by recent mutations in just three genes. It’s another beautiful example of how complex phenotypes can emerge from relatively small genotypic differences. I’d tell you much more about it, but I’m very busy right now. For more explanation, see here, here, and here (you’re free to ignore the silly headline of that last article). Oh, and here’s a key figure from the paper. I’ve heard that a picture is worth a thousand words, which effectively makes this a 1200-word post. All this writing is hurting my brain, so I’ll stop now.

a tale of dogs, their coats, and three genetic mutations