what I’ve learned from a failed job search

For the last few months, I’ve been getting a steady stream of emails in my inbox that go something like this:

Dear Dr. Yarkoni,

We recently concluded our search for the position of Assistant Grand Poobah of Academic Sciences in the Area of Multidisciplinary Widget Theory. We received over seventy-five thousand applications, most of them from truly exceptional candidates whose expertise and experience would have been welcomed with open arms at any institution of higher learning–or, for that matter, by the governing board of a small planet. After a very careful search process (which most assuredly did not involve a round or two on the golf course every afternoon, and most certainly did not culminate in a wild injection of an arm into a hat filled with balled-up names) we regret to inform you that we are unable to offer you this position. This should not be taken to imply that your intellectual ability or accomplishments are in any way inferior to those of the person who we ultimately did offer the position to (or rather, persons–you see, we actually offered the job to six people before someone accepted it); what we were attempting to optimize, we hope you understand, was not the quality of the candidate we hired, but a mythical thing called ‘fit’ between yourself and ourselves. Or, to put it another way, it’s not you, it’s us.

We wish you all the best in your future endeavors, and rest assured that if we have another opening in future, we will celebrate your reapplication by once again balling your name up and tossing it into a hat along with seventy-five thousand others.

These letters are typically so warm and fuzzy that it’s hard to feel bad about them. I mean, yes, they’re basically telling me I failed at something, but then, how often does anyone ever actually tell me I’m an impressive, accomplished human being? Never! If every failure in my life were accompanied by this kind of note, I’d be much more willing to try new things. Though, truth be told, I probably wouldn’t try very hard at anything; it would be worth failing in advance just to get this kind of affirmation.

Anyway, the reason I’ve been getting these letters, as you might surmise, is that I’ve been applying for academic jobs. I’ve been doing this for two years now, and will be doing it for a third year in a row in a few months, which I’m pretty sure qualifies me as a world-recognized expert on the process. So in the interest of helping other people achieve the same prowess at failing to secure employment, I’ve decided to share some of the lessons I’ve learned here. This missive comes to you with all of the standard caveats and qualifiers–like, for example, that you should be sitting down when you read this; that you should double-check with people you actually respect to make sure any of this makes sense; and, most importantly, that you’ve completely lost your mind if you try to actually apply any of this ‘knowledge’ to your own personal situation. With that in mind, Here’s What I’ve Learned:

1. The academic job market is really, really bad. No, seriously. I’ve heard from people at several major research universities that they received anywhere from 150 to 500 applications for individual positions (the latter for open-area positions). Some proportion of these applications come from people who have no real shot at the position, but a huge proportion are from truly exceptional candidates with many, many publications, awards, and glowing letters of recommendation. People who you would think have a bright future in research ahead of them. Except that many of them don’t actually have a bright future in research ahead of them, because these days all of that stuff apparently isn’t enough to land a tenure-track position–and often, isn’t even enough to land an interview.

Okay, to be fair, the situation isn’t quite that bad across the board. For one thing, I was quite selective about my job search this past year. I applied for 22 positions, which may sound like a lot, but there were a lot of ads last year, and I know people with similar backgrounds to mine who applied to 50 to 80 positions and could have expanded their searches still further. So, depending on what kind of position you’re aiming for–particularly if you’re interested in a teaching-heavy position at a small school–the market may actually be quite reasonable at the moment. What I’m talking about here really only applies to people looking for research-intensive positions at major research universities. And specifically, to people looking for jobs primarily in the North American market. I recognize that’s probably a minority of people graduating with PhDs in psychology, but since it’s my blog, you’ll have to live with my peculiar little biases. With that qualifier in mind, I’ll reiterate: the market sucks right now.

2. I’m not as awesome as I thought I was. Lest you think I’ve suddenly turned humble, let me reassure you that I still think I’m pretty awesome–and I can back that up with hard evidence, because I currently have about 20 emails in my inbox from fancy-pants search committee members telling me what a wonderful, accomplished human being I am. I just don’t think I’m as awesome as I thought I was a year ago. Mind you, I’m not quite so delusional that I expected to have my choice of jobs going in, but I did think I had a decent enough record–twenty-odd publications, some neat projects, a couple of major grant proposals submitted (and one that looks very likely to get funded)–to land at least one or two interviews. I was wrong. Which means I’ve had to take my ego down a peg or two. On balance, that’s probably not a bad thing.

3. It’s hard to get hired without a conventional research program. Although I didn’t get any interviews, I did hear back informally from a couple of places (in addition to those wonderful form letters, I mean), and I’ve had hallway conversations with many people who’ve sat on search committees before. The general feedback has been that my work focuses too much on methods development and not enough on substantive questions. This doesn’t really come as a surprise; back when I was putting together my research statement and application materials, pretty much everyone I talked to strongly advised me to focus on a content area first and play down my methods work, because, they said, no one really hires people who predominantly work on methods–at least in psychology. I thought (and still think) this is excellent advice, and in fact it’s exactly the same advice I give to other people if they make the mistake of asking me for my opinion. But ultimately, I went ahead and marketed myself as a methods person anyway. My reasoning was that I wouldn’t want to show up for a new job having sold myself as a person who does A, B, and C, and then mostly did X, Y, and Z, with only a touch of A thrown in. Or, you know, to put it in more cliched terms, I want people to like me for meeeeeee.

I’m still satisfied with this strategy, even if it ends up costing me a few interviews and a job offer or two (admittedly, this is a bit presumptuous–more likely than not, I wouldn’t have gotten any interviews this time around no matter how I’d framed my application). I do the kind of work I do because I enjoy it and think it’s important; I’m pretty happy where I am, so I don’t feel compelled to–how can I put this diplomatically–fib to search committees. Which isn’t to say that I’m laboring under any illusion that you always have to be completely truthful when applying for jobs; I’m fully aware that selling yourself (or, to put it more charitably, framing your application around your strengths)–and telling people what they want to hear to some extent–is a natural and reasonable thing to do. So I’m not saying this out of any bitterness or naivete; I’m just explaining why I chose to go the honest route that was unlikely to land me a job as opposed to the slightly less honest route that was very slightly more likely to land me a job.

4. There’s a large element of luck involved in landing an academic job. Or, for that matter, pretty much any other kind of job. I’m not saying it’s all luck, of course; far from it. In practice, a single group of maybe three dozen people seems to end up filling the bulk of interview slots at major research universities in any given year. Which is to say, while the majority of applicants will go without any interviews at all, some people end up with a dozen or more of them. So it’s clearly very far from a random process; in the long run, better candidates are much more likely to get jobs. But for any given job, the odds of getting an interview and/or job offer depend on any number of factors that you have little or no control over: what particular area the department wants to shore up; what courses need to be taught; how your personality meshes with the people who interview you; which candidate a particular search committee member idiosyncratically happens to take a shining to; and so on. Over the last few months, I’ve found it useful to occasionally remind myself of this fact when my inbox doth overfloweth with rejection letters. Of course, there’s a very thin line between justifiably attributing your negative outcomes to bad luck and failing to take responsibility for things that are under your control, so it’s worth using the power of self-serving rationalization sparingly.


In any case, those vacuous observations (er, lessons) aside, my plan at this point is still to keep doing essentially the same thing I’ve done the last two years, which consists of (i) putting together what I hope is a strong, if somewhat unconventional, application package; (ii) applying for jobs very selectively–only to places where I think I’d be at least as happy as I am in my current position; and (iii) otherwise spending as little of my time as possible thinking about my future employment status, and as much of it as possible concentrating on my research and personal life.

I don’t pretend to think this is a good strategy in general; it’s just what I’ve settled on and am happy with for the moment. But ask me again a year from now and who knows, maybe I’ll be roaming around downtown Boulder fishing quarters out of the creek for lunch money. In the meantime, I hope this rather uneventful report of my rather uneventful job-seeking experience thus far is of some small use to someone else. Oh, and if you’re on a search committee and think you want to offer me a job, I’m happy to negotiate the terms of my employment in the comments below.

Big Pitch or Big Lottery? The unenviable task of evaluating the grant review system

This week’s issue of Science has an interesting article on The Big Pitch–a pilot NSF initiative to determine whether anonymizing proposals and dramatically cutting down their length (from 15 pages to 2) has a substantial impact on the results of the review process. The answer appears to be an unequivocal yes. From the article:

What happens is a lot, according to the first two rounds of the Big Pitch. NSF’s grant reviewers who evaluated short, anonymized proposals picked a largely different set of projects to fund compared with those chosen by reviewers presented with standard, full-length versions of the same proposals.

Not surprisingly, the researchers who did well under the abbreviated format are pretty pleased:

Shirley Taylor, an awardee during the evolution round of the Big Pitch, says a comparison of the reviews she got on the two versions of her proposal convinced her that anonymity had worked in her favor. An associate professor of microbiology at Virginia Commonwealth University in Richmond, Taylor had failed twice to win funding from the National Institutes of Health to study the role of an enzyme in modifying mitochondrial DNA.

Both times, she says, reviewers questioned the validity of her preliminary results because she had few publications to her credit. Some reviews of her full proposal to NSF expressed the same concern. Without a biographical sketch, Taylor says, reviewers of the anonymous proposal could “focus on the novelty of the science, and this is what allowed my proposal to be funded.”

Broadly speaking, there are two ways to interpret the divergent results of the standard and abbreviated review. The charitable interpretation is that the change in format is, in fact, beneficial, inasmuch as it eliminates prior reputation as one source of bias and forces reviewers to focus on the big picture rather than on small methodological details. Of course, as Prof-Like Substance points out in an excellent post, one could mount a pretty reasonable argument that this isn’t necessarily a good thing. After all, a scientist’s past publication record is likely to be a good predictor of their future success, so it’s not clear that proposals should be anonymous when large amounts of money are on the line (and there are other ways to counteract the bias against newbies–e.g., NIH’s approach of explicitly giving New Investigators a payline boost until they get their first R01). And similarly, some scientists might be good at coming up with big ideas that sound plausible at first blush and not so good at actually carrying out the research program required to bring those big ideas to fruition. Still, at the very least, if we’re being charitable, The Big Pitch certainly does seem like a very different kind of approach to review.

The less charitable interpretation is that the reason the ratings of the standard and abbreviated proposals showed very little correlation is that the latter approach is just fundamentally unreliable. If you suppose that it’s just not possible to reliably distinguish a very good proposal from a somewhat good one on the basis of just 2 pages, it makes perfect sense that 2-page and 15-page proposal ratings don’t correlate much–since you’re basically selecting at random in the 2-page case. Understandably, researchers who happen to fare well under the 2-page format are unlikely to see it that way; they’ll probably come up with many plausible-sounding reasons why a shorter format just makes more sense (just like most researchers who tend to do well with the 15-page format probably think it’s the only sensible way for NSF to conduct its business). We humans are all very good at finding self-serving rationalizations for things, after all.

Personally, I don’t have very strong feelings about the substantive merits of short versus long-format review–though I guess I do find it hard to believe that 2-page proposals could be ranked very reliably, given that some very strange things seem to happen with alarming frequency even with 12- and 15-page proposals. But it’s an empirical question, and I’d love to see relevant data. In principle, the NSF could have obtained that data by having two parallel review panels rate all of the 2-page proposals (or even four panels, since one would also like to know how reliable the normal review process is). That would allow the agency to directly quantify the reliability of the ratings by looking at their cross-panel consistency. Absent that kind of data, it’s very hard to know whether the results Science reports on are different because 2-page review emphasizes different (but important) things, or because a rating process based on an extended 2-page abstract just amounts to a glorified lottery.
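
To make that concrete, here’s a minimal simulation sketch (in Python, with entirely made-up numbers; nothing below comes from the actual Big Pitch data) of why unreliable 2-page ratings would show up both as weak agreement with full-length ratings and as weak agreement between two parallel panels rating the same short proposals:

```python
import numpy as np

rng = np.random.default_rng(0)

n_proposals = 200                      # hypothetical applicant pool
true_quality = rng.normal(size=n_proposals)

def panel_ratings(reliability, rng):
    """Simulate one panel's ratings: true quality plus noise, where
    `reliability` is the share of rating variance due to true quality."""
    noise_sd = np.sqrt((1 - reliability) / reliability)
    return true_quality + rng.normal(scale=noise_sd, size=n_proposals)

# Assumed (not measured) reliabilities: full-length review fairly reliable,
# 2-page review much less so.
full_length   = panel_ratings(0.7, rng)
short_panel_a = panel_ratings(0.2, rng)
short_panel_b = panel_ratings(0.2, rng)   # an independent parallel panel

print(np.corrcoef(short_panel_a, full_length)[0, 1])    # ~0.4: the two formats disagree
print(np.corrcoef(short_panel_a, short_panel_b)[0, 1])  # ~0.2: low cross-panel consistency
```

The second number is the one that does the work here: cross-panel agreement directly estimates the reliability of the short-format ratings, which is exactly what you’d need in order to tell a “different but meaningful” signal apart from a glorified lottery.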

Alternatively, and perhaps more pragmatically, NSF could just wait a few years to see how the projects funded under the pilot program turn out (and I’m guessing this is part of their plan). I.e., do the researchers who do well under the 2-page format end up producing science as good as (or better than) the researchers who do well under the current system? This sounds like a reasonable approach in principle, but the major problem is that we’re only talking about a total of ~25 funded proposals (across two different review panels), so it’s unclear that there will be enough data to draw any firm conclusions. Certainly many scientists (including me) are likely to feel a bit uneasy at the thought that NSF might end up making major decisions about how to allocate billions of dollars on the basis of two dozen grants.
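
For a rough sense of just how little two dozen grants can tell you, here’s a back-of-the-envelope sketch (Python again, with an invented outcome measure and an invented success rate) of the uncertainty you’d be left with if, a few years from now, you simply scored each funded project as a success or a failure:

```python
import math

n = 25            # roughly the number of proposals funded under the pilot
successes = 15    # invented: suppose 60% of the projects are judged "successful"
p_hat = successes / n

# 95% confidence interval for the success rate (normal approximation)
se = math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"estimated success rate: {p_hat:.2f} (95% CI roughly {low:.2f} to {high:.2f})")
# With n = 25 the interval is nearly 0.4 wide, so any plausible difference
# between review formats would be swallowed by sampling noise.
```

And that’s just the uncertainty in the pilot program’s own hit rate; actually comparing it to the regular review process means estimating a second, equally noisy rate and then the difference between the two.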

Anyway, skepticism aside, this isn’t really meant as a criticism of NSF so much as an acknowledgment of the fact that the problem in question is a really, really difficult one. The task of continually evaluating and improving the grant review process is not one anyone should take on lightly. If time and money were no object, every proposed change (like dramatically shortened proposals) would be extensively tested on a large scale and directly compared to the current approach before being implemented. Unfortunately, flying thousands of scientists to Washington, D.C. is a very expensive business (to say nothing of all the surrounding costs), and I imagine that testing out a substantively different kind of review process on a large scale could easily run into the tens of millions of dollars. In a sense, the funding agencies can’t really win. On the one hand, if they only ever pilot new approaches on a small scale, they never get enough empirical data to confidently back major changes in policy. On the other hand, if they pilot new approaches on a large scale and those approaches end up failing to improve on the current system (as is the fate of most innovative new ideas), the funding agencies get hammered by politicians and scientists alike for wasting taxpayer money in an already-harsh funding climate.

I don’t know what the solution is (or if there is one), but if nothing else, I do think it’s a good thing that NSF and NIH continue to actively tinker with their various processes. After all, if there’s anything most researchers can agree on, it’s that the current system is very far from perfect.