There’s a narrative I find kind of troubling, but that unfortunately seems to be growing more common in science. The core idea is that the mere existence of perverse incentives is a valid and sufficient reason to knowingly behave in an antisocial way, just as long as one first acknowledges the existence of those perverse incentives. The way this dynamic usually unfolds is that someone points out some fairly serious problem with the way many scientists behave—say, our collective propensity to p-hack as if it’s going out of style, or the fact that we insist on submitting our manuscripts to publishers that are actively trying to undermine our interests—and then someone else will say, “I know, right—but what are you going to do, those are the incentives.”
As best I can tell, the words “it’s the incentives” are magic. Once they’re uttered by someone, natural law demands that everyone else involved in the conversation immediately stop whatever else they were doing, solemnly nod, and mumble something to the effect that, yes, the incentives are very bad, very bad indeed, and it’s a real tragedy that so many smart, hard-working people are being crushed under the merciless, gigantic boot of The System. Then there’s usually a brief pause, and after that, everyone goes back to discussing whatever they were talking about a moment earlier.
Perhaps I’m getting senile in my early middle age, but my anecdotal perception is that it used to be that, when somebody pointed out to a researcher that they might be doing something questionable, that researcher would typically either (a) argue that they weren’t doing anything questionable (often incorrectly, because there used to be much less appreciation for some of the statistical issues involved), or (b) look uncomfortable for a little while, allow an awkward silence to bloom, and then change the subject. In the last few years, I’ve noticed that uncomfortable discussions about questionable practices disproportionately seem to end with a chuckle or shrug, followed by a comment to the effect that we are all extremely sophisticated human beings who recognize the complexity of the world we live in, and sure it would be great if we lived in a world where one didn’t have to occasionally engage in shenanigans, but that would be extremely naive, and after all, we are not naive, are we?
There is, of course, an element of truth to this kind of response. I’m not denying that perverse incentives exist; they obviously do. There’s no question that many aspects of modern scientific culture systematically incentivize antisocial behavior, and I don’t think we can or should pretend otherwise. What I do object to quite strongly is the narrative that scientists are somehow helpless in the face of all these awful incentives—that we can’t possibly be expected to take any course of action that has any potential, however small, to impede our own career development.
“I would publish in open access journals,” your friendly neighborhood scientist will say. “But those have lower impact factors, and I’m up for tenure in three years.”
Or: “If I corrected for multiple comparisons in this situation, my effect would go away, and then the reviewers would reject the paper.”
Or: “I can’t ask my graduate students to collect an adequately-powered replication sample; they need to publish papers as quickly as they can so that they can get a job.”
There are innumerable examples of this kind, and they’ve become so routine that it appears many scientists have stopped thinking about what the words they’re saying actually mean, and instead simply glaze over and nod sagely whenever the dreaded Incentives are invoked.
A random bystander who happened to eavesdrop on a conversation between a group of scientists kvetching about The Incentives could be forgiven for thinking that maybe, just maybe, a bunch of very industrious people who generally pride themselves on their creativity, persistence, and intelligence could find some way to work around, or through, the problem. And I think they would be right. The fact that we collectively don’t see it as a colossal moral failing that we haven’t figured out a way to get our work done without having to routinely cut corners in the rush for fame and fortune is deeply troubling.
It’s also aggravating on an intellectual level, because the argument that we’re all being egregiously and continuously screwed over by The Incentives is just not that good. I think there are a lot of reasons why researchers should be very hesitant to invoke The Incentives as a justification for why any of us behave the way we do. I’ll give nine of them here, but I imagine there are probably others.
1. You can excuse anything by appealing to The Incentives
No, seriously—anything. Once you start crying that The System is Broken in order to excuse your actions (or inactions), you can absolve yourself of responsibility for all kinds of behaviors that, on paper, should raise red flags. Consider just a few behaviors that hardly any scientist would condone:
- Fabricating data or results
- Regularly threatening to fire trainees in order to scare them into working harder
- Deliberately sabotaging competitors’ papers or grants by reviewing them negatively
I think it’s safe to say most of us consider such practices to be thoroughly immoral, yet there are obviously people who engage in each of them. And when those people are caught or confronted, one of the most common justifications they fall back on is… you guessed it: The Incentives! When Diederik Stapel confessed to fabricating the data used in over 50 publications, he didn’t explain his actions by saying “oh, you know, I’m probably a bit of a psychopath”; instead, he placed much of the blame squarely on The Incentives:
I did not withstand the pressure to score, to publish, the pressure to get better in time. I wanted too much, too fast. In a system where there are few checks and balances, where people work alone, I took the wrong turn. I want to emphasize that the mistakes that I made were not born out of selfish ends.
Stapel wasn’t acting selfishly, you see… he was just subject to intense pressures. Or, you know, Incentives.
Or consider these quotes from a New York Times article describing Stapel’s unraveling:
In his early years of research — when he supposedly collected real experimental data — Stapel wrote papers laying out complicated and messy relationships between multiple variables. He soon realized that journal editors preferred simplicity. “They are actually telling you: ‘Leave out this stuff. Make it simpler,'” Stapel told me. Before long, he was striving to write elegant articles.
…
The experiment — and others like it — didn’t give Stapel the desired results, he said. He had the choice of abandoning the work or redoing the experiment. But he had already spent a lot of time on the research and was convinced his hypothesis was valid. “I said — you know what, I am going to create the data set,” he told me.
Reading through such accounts, it’s hard to avoid the conclusion that Stapel’s self-narrative is strikingly similar to the one that gets tossed out all the time on social media, or in conference bar conversations: here I am, a good scientist trying to do an honest job, and yet all around me is a system that incentivizes deception and corner-cutting. What do you expect me to do?
Curiously, I’ve never heard any of my peers—including many of the same people who are quick to invoke The Incentives to excuse their own imperfections—seriously endorse The Incentives as an acceptable justification for Stapel’s behavior. In Stapel’s case, the inference we overwhelmingly jump to is that there must be something deeply wrong with Stapel, seeing as the rest of us also face the same perverse incentives on a daily basis, yet we somehow manage to get by without fabricating data. But this conclusion should make us a bit uneasy, I think, because if it’s correct (and I think it is), it implies that we aren’t really such slaves to The Incentives after all. When our morals get in the way, we appear to be perfectly capable of resisting temptation. And I mean, it’s not even like it’s particularly difficult; I doubt many researchers actively have to fight the impulse to manipulate their data, despite the enormous incentives to do so. I submit that the reason many of us feel okay doing things like reporting exploratory results as confirmatory results, or failing to mention that we ran six other studies we didn’t report, is not really that The Incentives are forcing us to do things we don’t like, but that it’s easier to attribute our unsavory behaviors to unstoppable external forces than to take responsibility for them and accept the consequences.
Needless to say, I think this kind of attitude is fundamentally hypocritical. If we’re not comfortable with pariahs like Stapel blaming The Incentives for causing them to fabricate data, we shouldn’t use The Incentives as an excuse for doing things that are on the same spectrum, albeit less severe. If you think that what the words “I did not withstand the pressure to score” really mean when they fall out of Stapel’s mouth is something like “I’m basically a weak person who finds the thought of not being important so intolerable I’m willing to cheat to get ahead”, then you shouldn’t give yourself a free pass just because when you use that excuse, you’re talking about much smaller infractions. Consider the possibility that maybe, just like Stapel, you’re actually appealing to The Incentives as a crutch to avoid having to make your life very slightly more difficult.
2. It would break the world if everyone did it
When people start routinely accepting that The System is Broken and The Incentives Are Fucking Us Over, bad things tend to happen. It’s very hard to have a stable, smoothly functioning society once everyone believes (rightly or wrongly) that gaming the system is the only way to get by. Imagine if every time you went to your doctor—and I’m aware that this analogy won’t work well for people living outside the United States—she sent you to get a dozen expensive and completely unnecessary medical tests, and then, when prompted for an explanation, simply shrugged and said “I know I’m not an angel—but hey, them’s The Incentives.” You would be livid—even though it’s entirely true (at least in the United States; other developed countries seem to have figured this particular problem out) that many doctors have financial incentives to order unnecessary tests.
To be clear, I’m not saying perverse incentives never induce bad behavior in medicine or other fields. Of course they do. My point is that practitioners in other fields at least appear to have enough sense not to loudly trumpet The Incentives as a reasonable justification for their antisocial behavior—or to pat themselves on the back for being the kind of people who are clever enough to see the fiendish Incentives for exactly what they are. My sense is that when doctors, lawyers, journalists, etc. fall prey to The Incentives, they generally consider that to be a source of shame. I won’t go so far as to suggest that we scientists take pride in behaving badly—we obviously don’t—but we do seem to have collectively developed a rather powerful form of learned helplessness that doesn’t seem to be matched by other communities. Which is a fortunate thing, because if every other community also developed the same attitude, we would be in a world of trouble.
3. You are not special
Individual success in science is, to a first approximation, a zero-sum game—at least in the short term. Many scientists who appeal to The Incentives seem to genuinely believe that opting out of doing the right thing is a victimless crime. I mean, sure, it might make the system a bit less efficient overall… but that’s just life, right? It’s not like anybody’s actually suffering.
Well yeah, people actually do suffer. There are many scientists who are willing to do the right things—to preregister their analysis plans, to work hard to falsify rather than confirm their hypotheses, to diligently draw attention to potential confounds that complicate their preferred story, and so on. When you assert your right to opt out of these things because apparently your publications, your promotions, and your students are so much more important than everyone else’s, you’re cheating those people.
No, really, you are. If you don’t like to think of yourself as someone who cheats other people, don’t reflexively collapse on a crutch made out of stainless steel Incentives any time someone questions your process. You are not special. Your publications, job, and tenure are not more important than other people’s. The fact that there are other people in your position engaging in the same behaviors doesn’t mean you and your co-authors are all very sophisticated, and that the people who refuse to cut corners are naive simpletons. What it actually demonstrates is that, somewhere along the way, you developed the reflexive ability to rationalize away behavior that you would disapprove of in others and that, viewed dispassionately, is clearly damaging to science.
4. You (probably) have no data
It’s telling that appeals to The Incentives are rarely supported by any actual data. It’s simply taken for granted that engaging in the practice in question would be detrimental to one’s career. The next time you’re tempted to blame The System for making you do bad things, you might want to ask yourself this: Do you actually know that, say, publishing in PLOS ONE rather than [insert closed society journal of your choice] would hurt your career? If so, how do you know that? Do you have any good evidence for it, or have you simply accepted it as stylized fact?
Coming by the kind of data you’d need to answer this question is actually not that easy: it’s not enough to reflexively point to, say, the fact that some journals have higher impact factors than others. To identify the utility-maximizing course of action, you’d need to integrate over both benefits and costs, and the costs are not always so obvious. For example, the opportunity cost of submitting your paper to a “good” journal will be offset to some extent by the likelihood of faster publication (no need to spend two years racking up rejections at high-impact venues), by the positive signal you send to at least some of your peers that you support open scientific practices, and so on.
I’m not saying that a careful consideration of the pros and cons of doing the right thing would usually lead people to change their minds. It often won’t. What I’m saying is that people who blame The Incentives for forcing them to submit their papers to certain journals, to tell post-hoc stories about their work, or to use suboptimal analytical methods don’t generally support their decisions with data, or even with well-reasoned argument. The defense is usually completely reflexive—which should raise our suspicion that it’s also just a self-serving excuse.
5. It (probably) won’t matter anyway
This one might hurt a bit, but I think it’s important to consider—particularly for early-career researchers. Let’s suppose you’re right that doing the right thing in some particular case would hurt your career. Maybe it really is true that if you comprehensively report all the studies you ran in your paper, and not just the ones that “worked”, your colleagues will receive your work less favorably. In such cases it may seem natural to think that there has to be a tight relationship between the current decision and the global outcome—i.e., that if you don’t drop the failed studies, you won’t get a tenure-track position three years down the road. After all, you’re focusing on that causal relationship right now, and it seems so clear in your head!
Unfortunately (or perhaps fortunately?), reality doesn’t operate that way. Outcomes in academia are multiply determined and enormously complex. You can tell yourself that getting more papers out faster will get you a job if it makes you feel better, but that doesn’t make it true. If you’re a graduate student on the job market these days, I have sad news for you: you’re probably not getting a tenure-track job no matter what you do. It doesn’t matter how many p-hacked papers you publish, or how thinly you slice your dissertation into different “studies”; there are not nearly enough jobs to go around for everyone who wants one.
Suppose you’re right, and your sustained pattern of corner-cutting is in fact helping you get ahead. How far ahead do you think it’s helping you get? Is it taking you from a 3% chance of getting a tenure-track position at an R1 university to an 80% chance? Almost certainly not. Maybe it’s increasing that probability from 7% to 11%; that would still be a non-trivial relative increase, but it doesn’t change the fact that, for the average grad student, there is no full-time faculty position waiting at the end of the road. Despite what the environment around you may make you think, the choice most graduate students and postdocs face is not actually between (a) maintaining your integrity and “failing” out of science or (b) cutting a few corners and achieving great fame and fortune as a tenured professor. The Incentives are just not that powerful. The vastly more common choice you face as a trainee is between (a) maintaining your integrity and having a pretty low chance of landing a permanent research position, or (b) cutting a bunch of corners that threaten the validity of your work and having a slightly higher (but still low in absolute terms) chance of landing a permanent research position. And even that’s hardly guaranteed, because you never know when there’s someone on a hiring committee who’s going to be turned off by the obvious p-hacking in your work.
The point is, the world is complicated, and as a general rule, very few things—including the number of publications you produce—are as important as they seem to be when you’re focusing on them in the moment. If you’re an early-career researcher and you regularly find yourself struggling between doing what’s right and doing what isn’t right but (you think) benefits your career, you may want to take a step back and dispassionately ask yourself whether this integrity-versus-expediency conflict is actually a productive way to frame things. Instead, consider the alternative framing I suggested above: you are most likely going to leave academia eventually, no matter what you do, so why not at least try to see the process through with some intellectual integrity? And I mean, if you’re really so convinced that The System is Broken, why would you want to stay in it anyway? Do you think standards are going to change dramatically in the next few years? Are you laboring under the impression that you, of all people, are going to somehow save science?
This brings us directly to the next point…
6. You’re (probably) not going to “change things from the inside”
Over the years, I’ve talked to quite a few early-career researchers who have told me that while they can’t really stop engaging in questionable research practices right now without hurting their career, they’re definitely going to do better once they’re in a more established position. These are almost invariably nice, well-intentioned people, and I don’t doubt that they genuinely believe what they say. Unfortunately, what they say is slippery, and has a habit of adapting to changing circumstances. As a grad student or postdoc, it’s easy to think that once you get a faculty position, you’ll be able to start doing research the “right” way. But once you get a faculty position, it then turns out you need to get papers and grants in order to get tenure (I mean, who knew?), so you decide to let the dreaded Incentives win for just a few more years. And then, once you secure tenure, well, now the problem is that your graduate students also need jobs, just like you once did, so you can’t exactly stop publishing at the same rate, can you? Plus, what would all your colleagues think if you effectively said, “oh, you should all treat the last 15 years of my work with skepticism—that was just for tenure”?
I’m not saying there aren’t exceptions. I’m sure there are. But I can think of at least a half-dozen people off-hand who’ve regaled me with some flavor of “once I’m in a better position” story, and none of them, to my knowledge, have followed through on their stated intentions in a meaningful way. And I don’t find this surprising: in most walks of life, course correction generally becomes harder, not easier, the longer you’ve been traveling on the wrong bearing. So if part of your unhealthy respect for The Incentives is rooted in an expectation that those Incentives will surely weaken their grip on you just as soon as you reach the next stage of your career, you may want to rethink your strategy. The Incentives are not going to dissipate as you move up the career ladder; if anything, you’re probably going to have an increasingly difficult time shrugging them off.
7. You’re not thinking long-term
One of the most frustrating aspects of appeals to The Incentives is that they almost invariably seem to focus exclusively on the short-to-medium term. But the long term also matters. And there, I would argue that The Incentives very much favor a radically different—and more honest—approach to scientific research. To see this, we need only consider the ongoing “replication crisis” in many fields of science. One thing that I think has been largely overlooked in discussions about the current incentive structure of science is what impact the replication crisis will have on the legacies of a huge number of presently famous scientists.
I’ll tell you what impact it will have: many of those legacies will be completely zeroed out. And this isn’t just hypothetical scaremongering. It’s happening right now to many former stars of psychology (and, I imagine, other fields I’m less familiar with). There are many researchers we can point to right now who used to be really famous (like, major-chunks-of-the-textbook famous), are currently famous-with-an-asterisk, and will, in all likelihood, be completely unknown again within a couple of decades. The unlucky ones are probably even fated to become infamous—their entire scientific legacies eventually reduced to footnotes in cautionary histories illustrating how easily entire areas of scientific research can lose their footing when practitioners allow themselves to be swept away by concerns about The Incentives.
You probably don’t want this kind of thing to happen to you. I’m guessing you would like to retire with at least some level of confidence that your work, while maybe not Earth-shattering in its implications, isn’t going to be tossed on the scrap heap of history one day by a new generation of researchers amazed at how cavalier you and your colleagues once were about silly little things like “inferential statistics” and “accurate reporting”. So if your justification for cutting corners is that you can’t otherwise survive or thrive in the present environment, you should consider the prospect—and I mean, really take some time to think about it—that any success you earn within the next 10 years by playing along with The Incentives could ultimately make your work a professional joke within the 20 years after that.
8. It achieves nothing and probably makes things worse
Hey, are you a scientist? Yes? Great, here’s a quick question for you: do you think there’s any working scientist on Planet Earth who doesn’t already know that The Incentives are fucked up? No? I didn’t think so. Which means you really don’t need to keep bemoaning The Incentives; I promise you that you’re not helping to draw much-needed attention to an important new problem nobody’s recognized before. You’re not expressing any deep insight by pointing out that hiring committees prefer applicants with lots of publications in high-impact journals to applicants with a few publications in journals no one’s ever heard of. If your complaints are achieving anything at all, they’re probably actually making things worse by constantly (and incorrectly) reminding everyone around you about just how powerful The Incentives are.
Here’s a suggestion: maybe try not talking about The Incentives for a while. You could even try, I don’t know, working against The Incentives for a change. Or, if you can’t do that, just don’t say anything at all. Probably nobody will miss anything, and the early-career researchers among us might even be grateful for a respite from their senior colleagues’ constant reminder that The System—the very same system those senior colleagues are responsible for creating!—is so fucked up.
9. It’s your job
This last one seems so obvious it should go without saying, but it does need saying, so I’ll say it: a good reason why you should avoid hanging bad behavior on The Incentives is that you’re a scientist, and trying to get closer to the truth, and not just to tenure, is in your fucking job description. Taxpayers don’t fund you because they care about your career; they fund you to learn shit, cure shit, and build shit. If you can’t do your job without having to regularly excuse sloppiness on the grounds that you have no incentive to be less sloppy, at least have the decency not to say that out loud in a crowded room or Twitter feed full of people who indirectly pay your salary. Complaining that you would surely do the right thing if only these terrible Incentives didn’t exist doesn’t make you the noble martyr you think it does; to almost anybody outside your field who has a modicum of integrity, it just makes you sound like you’re looking for an easy out. It’s not sophisticated or worldly or politically astute, it’s just dishonest and lazy. If you find yourself unable to do your job without regularly engaging in practices that clearly devalue the very science you claim to care about, and this doesn’t bother you deeply, then maybe the problem is not actually The Incentives—or at least, not The Incentives alone. Maybe the problem is You.