Induction is not optional (if you’re using inferential statistics): reply to Lakens

A few months ago, I posted an online preprint titled The Generalizability Crisis. Here’s the abstract:

Most theories and hypotheses in psychology are verbal in nature, yet their evaluation overwhelmingly relies on inferential statistical procedures. The validity of the move from qualitative to quantitative analysis depends on the verbal and statistical expressions of a hypothesis being closely aligned—that is, that the two must refer to roughly the same set of hypothetical observations. Here I argue that most inferential statistical tests in psychology fail to meet this basic condition. I demonstrate how foundational assumptions of the “random effects” model used pervasively in psychology impose far stronger constraints on the generalizability of results than most researchers appreciate. Ignoring these constraints dramatically inflates false positive rates and routinely leads researchers to draw sweeping verbal generalizations that lack any meaningful connection to the statistical quantities they’re putatively based on. I argue that failure to consider generalizability from a statistical perspective lies at the root of many of psychology’s ongoing problems (e.g., the replication crisis), and conclude with a discussion of several potential avenues for improvement.

I submitted the paper to Behavioral and Brain Sciences, and recently received 6 (!) generally positive reviews. I’m currently in the process of revising the manuscript in response to a lot of helpful feedback (both from the BBS reviewers and a number of other people). In the interim, however, I’ve decided to post a response to one of the reviews that I felt was not helpful, and instead has had the rather unfortunate effect of derailing some of the conversation surrounding my paper.

The review in question is by Daniel Lakens, who, in addition to being one of the BBS reviewers, also posted his review publicly on his blog. While I take issue with the content of Lakens’s review, I’m a fan of open, unfiltered commentary, so I appreciate Daniel taking the time to share his thoughts, and I’ve done the same here. In the rather long piece that follows, I argue that Lakens’s criticisms of my paper stem from an incoherent philosophy of science, and that once we amend that view to achieve coherence, it becomes very clear that his position doesn’t contradict the argument laid out in my paper in any meaningful way—in fact, if anything, the former is readily seen to depend on the latter.

Lakens makes five main points in his review. My response also has five sections, but I’ve moved some arguments around to give the post a better flow. I’ve divided things up into two main criticisms (mapping roughly onto Lakens’s points 1, 4, and 5), followed by three smaller ones you should probably read only if you’re entertained by petty, small-stakes academic arguments.

Bad philosophy

Lakens’s first and probably most central point can be summarized as a concern with (what he sees as) a lack of philosophical grounding, resulting in some problematic assumptions. Lakens argues that my paper fails to respect a critical distinction between deduction and induction, and consequently runs aground by assuming that scientists (or at least, psychologists) are doing induction when (according to Lakens) they’re doing deduction. He suggests that my core argument—namely, that verbal and statistical hypotheses have to closely align in order to support sensible inference—assumes a scientific project quite different from what most psychologists take themselves to be engaged in.

In particular, Lakens doesn’t think that scientists are really in the business of deriving general statements about the world on the basis of specific observations (i.e., induction). He thinks science is better characterized as a deductive enterprise, where scientists start by positing a particular theory, and then attempt to test the predictions they wring out of that theory. This view, according to Lakens, does not require one to care about statistical arguments of the kind laid out in my paper. He writes:

Yarkoni incorrectly suggests that “upon observing that a particular set of subjects rated a particular set of vignettes as more morally objectionable when primed with a particular set of cleanliness-related words than with a particular set of neutral words, one might draw the extremely broad conclusion that ‘cleanliness reduces the severity of moral judgments'”. This reverses the scientific process as proposed by Popper, which is (as several people have argued, see below) the dominant approach to knowledge generation in psychology. The authors are not concluding that “cleanliness reduces the severity of moral judgments” from their data. This would be induction. Instead, they are positing that “cleanliness reduces the severity of moral judgments”, they collected data and performed and empirical test, and found their hypothesis was corroborated. In other words, the hypothesis came first. It is not derived from the data – the hypothesis is what led them to collect the data.

Lakens’s position is that theoretical hypotheses are not inferred from the data in a bottom-up, post-hoc way—i.e., by generalizing from finite observations to a general regularity—rather, they’re formulated in advance of the data, which is then only used to evaluate the tenability of the theoretical hypothesis. This, in his view, is how we should think about what psychologists are doing—and he credits this supposedly deductivist view to philosophers of science like Popper and Lakatos:

Yarkoni deviates from what is arguably the common approach in psychological science, and suggests induction might actually work: “Eventually, if the effect is shown to hold when systematically varying a large number of other experimental factors, one may even earn the right to summarize the results of a few hundred studies by stating that “cleanliness reduces the severity of moral judgments””. This approach to science flies right in the face of Popper (1959/2002, p. 10), who says: “I never assume that we can argue from the truth of singular statements to the truth of theories. I never assume that by force of ‘verified’ conclusions, theories can be established as ‘true’, or even as merely ‘probable’.”

Similarly, Lakatos (1978, p. 2) writes: “One can today easily demonstrate that there can be no valid derivation of a law of nature from any finite number of facts; but we still keep reading about scientific theories being proved from facts. Why this stubborn resistance to elementary logic?” I am personally on the side of Popper and Lakatos, but regardless of my preferences, Yarkoni needs to provide some argument his inductive approach to science has any possibility of being a success, preferably by embedding his views in some philosophy of science. I would also greatly welcome learning why Popper and Lakatos are wrong. Such an argument, which would overthrow the dominant model of knowledge generation in psychology, could be impactful, although a-priori I doubt it will be very successful.

For reasons that will become clear shortly, I think Lakens’s appeal to Popper and Lakatos here is misguided—those philosophers’ views actually have very little resemblance to the position Lakens stakes out for himself. But let’s start with the distinction Lakens draws between induction and deduction, and the claim that the latter provides an alternative to the former—i.e., that psychologists can avoid making inductive claims if they simply construe what they’re doing as a form of deduction. While this may seem like an intuitive claim at first blush, closer inspection quickly reveals that, far from psychologists having a choice between construing the world in deductive versus inductive terms, they’re actually forced to embrace both forms of reasoning, working in tandem.

There are several ways to demonstrate this, but since Lakens holds deductivism in high esteem, we’ll start out from a strictly deductive position, and then show why our putatively deductive argument eventually requires us to introduce a critical inductive step in order to make any sense out of how contemporary psychology operates.

Let’s start with the following premise:

P1: If theory T is true, we should confirm prediction P

Suppose we want to build a deductively valid argument that starts from the above premise, which seems pretty foundational to hypothesis-testing in psychology. How can we embed P1 into a valid syllogism, so that we can make empirical observations (by testing P) and then update our belief in theory T? Here’s the most obvious deductively valid way to complete the syllogism:

P1: If theory T is true, we should confirm prediction P
P2: We fail to confirm prediction P
C: Theory T is false

So stated, this modus tollens captures the essence of “naive“ Popperian falsificationism: what scientists do (or ought to do) is attempt to disprove their hypotheses. On this view, if a theory T legitimately entails P, then disconfirming P is sufficient to falsify T. Once that’s done, a scientist can just pack it up and happily move on to the next theory.

Unfortunately, this account, while intuitive and elegant, fails miserably on the reality front. It simply isn’t how scientists actually operate. The problem, as Lakatos famously pointed out, is that the “core“ of a theory T never strictly entails a prediction P by itself. There are invariably other auxiliary assumptions and theories that need to hold true in order for the T → P conditional to apply. For example, observing that people walk more slowly out of a testing room after being primed with old age-related words than with youth-related words doesn’t provide any meaningful support for a theory of social priming unless one is willing to make a large number of auxiliary assumptions—for example, that experimenter knowledge doesn’t inadvertently bias participants; that researcher degrees of freedom have been fully controlled in the analysis; that the stimuli used in the two conditions don’t differ in some irrelevant dimension that can explain the subsequent behavioral change; and so on.

This “sophisticated falsificationism“, as Lakatos dubbed it, is the viewpoint that I gather Lakens thinks most psychologists implicitly subscribe to. And Lakens believes that the deductive nature of the reasoning articulated above is what saves psychologists from having to worry about statistical notions of generalizability.

Unfortunately, this is wrong. To see why, we need only observe that the Popperian and Lakatosian views frame their central deductive argument in terms of falsificationism: researchers can disprove scientific theories by failing to confirm predictions, but—as the Popper statement Lakens approvingly quotes suggests—they can’t affirmatively prove them. This constraint isn’t terribly problematic in heavily quantitative scientific disciplines where theories often generate extremely specific quantitative predictions whose failure would be difficult to reconcile with those theories’ core postulates. For example, Einstein predicted the gravitational redshift of light in 1907 on the basis of his equivalence principle, yet it took nearly 50 years to definitively confirm that prediction via experiment. At the time it was formulated, Einstein’s prediction would have made no sense except in light of the equivalence principle—so the later confirmation of the prediction provided very strong corroboration of the theory (and, by the same token, a failure to experimentally confirm the existence of redshift would have dealt general relativity a very serious blow). Thus, at least in those areas of science where it’s possible to extract extremely “risky“ predictions from one’s theories (more on that later), it seems perfectly reasonable to proceed as if critical experiments can indeed affirmatively corroborate theories—even if such a conclusion isn’t strictly deductively valid.

This, however, is not how almost any psychologists actually operate. As Paul Meehl pointed out in his seminal contrast of standard operating procedures in physics and psychology (Meehl, 1967), psychologists almost never make predictions whose disconfirmation would plausibly invalidate theories. Rather, they typically behave like confirmationists, concluding, on the basis of empirical confirmation of predictions, that their theories are supported (or corroborated). But this latter approach has a logic quite different from the (valid) falsificationist syllogism we saw above. The confirmationist logic that pervades psychology is better represented as follows:

P1: If theory T is true, we should confirm prediction P
P2: We confirm prediction P
C: Theory T is true

C would be a really nice conclusion to draw, if we were entitled to it, because, just as Lakens suggests, we would then have arrived at a way to deduce general theoretical statements from finite observations. Quite a trick indeed. But it doesn’t work; the argument is deductively invalid. If it’s not immediately clear to you why, consider the following argument, which has exactly the same logical structure:

Argument 1
P1: If God loves us all, the sky should be blue
P2: The sky is blue
C: God loves us all

We are not concerned here with the truth of the two premises, but only with the validity of the argument as a whole. And the argument is clearly invalid. Even if we were to assume P1 and P2, C still wouldn’t follow. Observing that the sky is blue (clearly true) doesn’t entail that God loves us all, even if P1 happens to be true, because there could be many other reasons the sky is blue that don’t involve God in any capacity (including, say, differential atmospheric scattering of different wavelengths of light), none of which are precluded by the stated premises.

Now you might want to say, well, sure, but Argument 1 is patently absurd, whereas the arguments Lakens attributes to psychologists are not nearly so silly. But from a strictly deductive standpoint, the typical logic of hypothesis testing in psychology is exactly as silly. Compare the above argument with a running example Lakens (following my paper) uses in his review:

Argument 2
P1: If the theory that cleanliness reduces the severity of moral judgments is true, we should observe condition A > condition B, p < .05
P2: We observe condition A > condition B, p < .05
C: Cleanliness reduces the severity of moral judgments

Subjectively, you probably find this argument much more compelling than the God-makes-the-sky-blue version in Argument 1. But that’s because you’re thinking about the relative plausibility of P1 in the two cases, rather than about the logical structure of the argument. As a purportedly deductive argument, Argument 2 is exactly as bad as Argument 1, and for exactly the same reason: it affirms the consequent. C doesn’t logically follow from P1 and P2, because there could be any number of other potential premises (P3…Pk) that reflect completely different theories yet allow us to derive exactly the same prediction P.

This propensity to pass off deductively nonsensical reasoning as good science is endemic to psychology (and, to be fair, many other sciences). The fact that the confirmation of most empirical predictions in psychology typically provides almost no support for the theories those predictions are meant to test does not seem to deter researchers from behaving as if affirmation of the consequent is a deductively sound move. As Meehl rather colorfully wrote all the way back in 1967:

In this fashion a zealous and clever investigator can slowly wend his way through a tenuous nomological network, performing a long series of related experiments which appear to the uncritical reader as a fine example of “an integrated research program,” without ever once refuting or corroborating so much as a single strand of the network.

Meehl was hardly alone in taking a dim view of the kind of argument we find in Argument 2, and which Lakens defends as a perfectly respectable “deductive“ way to do psychology. Lakatos—the very same Lakatos that Lakens claims he “is on the side of“—was no fan of it either. Lakatos generally had very little to say about psychology, and it seems pretty clear (at least to me) that his views about how science works were rooted primarily in consideration of natural sciences like physics. But on the few occasions that he did venture an opinion about the “soft“ sciences, he made it abundantly clear that he was not a fan. From Lakatos (1970):

This requirement of continuous growth … hits patched-up, unimaginative series of pedestrian ‘empirical’ adjustments which are so frequent, for instance, in modern social psychology. Such adjustments may, with the help of so-called ‘statistical techniques’, make some ‘novel’ predictions and may even conjure up some irrelevant grains of truth in them. But this theorizing has no unifying idea, no heuristic power, no continuity. They do not add up to a genuine research programme and are, on the whole, worthless1.

If we follow that footnote 1 after “worthless“, we find this:

After reading Meehl (1967) and Lykken (1968) one wonders whether the function of statistical techniques in the social sciences is not primarily to provide a machinery for producing phoney corroborations and thereby a semblance of “scientific progress” where, in fact, there is nothing but an increase in pseudo-intellectual garbage. … It seems to me that most theorizing condemned by Meehl and Lykken may be ad hoc3. Thus the methodology of research programmes might help us in devising laws for stemming this intellectual pollution …

By ad hoc3, Lakatos means that social scientists regularly explain anomalous findings by concocting new post-hoc explanations that may generate novel empirical predictions, but don’t follow in any sensible way from the “positive heuristic“ of a theory (i.e., the set of rules and practices that describe in advance how a researcher ought to interpret and respond to discrepancies). Again, here’s Lakatos:

In fact, I define a research programme as degenerating even if it anticipates novel facts but does so in a patched-up development rather than by a coherent, pre-planned positive heuristic. I distinguish three types of ad hoc auxiliary hypotheses: those which have no excess empirical content over their predecessor (‘ad hoc1’), those which do have such excess content but none of it is corroborated (‘ad hoc2’) and finally those which are not ad hoc in these two senses but do not form an integral part of the positive heuristic (‘ad hoc3’). … Some of the cancerous growth in contemporary social ‘sciences’ consists of a cobweb of such ad hoc3 hypotheses, as shown by Meehl and Lykken.

The above quotes are more or less the extent of what Lakatos had to say about psychology and the social sciences in his published work.

Now, I don’t claim to be able to read the minds of deceased philosophers, but in view of the above, I think it’s safe to say that Lakatos probably wouldn’t have appreciated Lakens claiming to be “on his side“. If Lakens wants to call the kind of view that considers Argument 2 a good way to do empirical science “deduction“, fine; but I’m going to refer to it as Lakensian deductivism from here on out, because it’s not deductivism in any sense that approximates the normal meaning of the word “deductive“ (I mean, it’s actually deductively invalid!), and I suspect Popper, Lakatos, and Meehl might have politely (or maybe not so politely) asked Lakens to cease and desist from implying that they approve of, or share, his views.

Induction to the rescue

So far, things are not looking so good for a strictly deductive approach to psychology. If we follow Lakens in construing deduction and induction as competing philosophical worldviews, and insist on banishing any kind of inductive reasoning from our inferential procedures, then we’re stuck facing up to the fact that virtually all hypothesis testing done by psychologists is actually deductively invalid, because it almost invariably has the logical form captured in Argument 2. I think this is a rather unfortunate outcome, if you happen to be a proponent of a view that you’re trying to convince people merits the label “deduction“.

Fortunately, all is not lost. It turns out that there is a way to turn Argument 2 into a perfectly reasonable basis for doing empirical science of the psychological variety. Unfortunately for Lakens, it runs directly through the kinds of arguments laid out in my paper. To see that, let’s first observe that we can turn the logically invalid Argument 2 into a valid syllogism by slightly changing the wording of P1:

Argument 3
P1: If, and only if, cleanliness reduces the severity of moral judgments, we should find that condition A > condition B, p < .05
P2: We find that condition A > condition B, p < .05
C: Cleanliness reduces the severity of moral judgments

Notice the newly added words and only if in P1. They make all the difference! If we know that the prediction P can only be true if theory T is correct, then observing P does in fact allow us to deductively conclude that T is correct. Hooray!

Well, except that this little modification, which looks so lovely on paper, doesn’t survive contact with reality, because in psychology, it’s almost never the case that a given prediction could only have plausibly resulted from one’s favorite theory. Even if you think P1 is true in Argument 2 (i.e., the theory really does make that prediction), it’s clearly false in our updated Argument 3. There are lots of other reasons why we might observe the predicted result, p < .05, even if the theoretical hypothesis is false (i.e., if cleanliness doesn’t reduce the severity of moral judgment). For example, maybe the stimuli in condition A differ on some important but theoretically irrelevant dimension from those in B. Or maybe there are demand characteristics that seep through to the participants despite the investigators’ best efforts. Or maybe the participants interpret the instructions in some unexpected way, leading to strange results. And so on.

Still, we’re on the right track. And we can tighten things up even further by making one last modification: we replace our biconditional P1 above with the following probabilistic version:

Argument 4
P1: It’s unlikely that we would observe A > B, p < .05, unless cleanliness reduces the severity of moral judgments
P2: We observe A > B, p < .05
C1: It’s probably true that cleanliness reduces the severity of moral judgments

Some logicians might quibble with Argument 4, because replacing words like “all“ and “only“ with words like “probably“ and “unlikely“ requires some careful thinking about the relationship between logical and probabilistic inference. But we’ll ignore that here. Whatever modifications you need to make to enable your logic to handle probabilistic statements, I think the above is at least a sensible way for psychologists to proceed when testing hypotheses. If it’s true that the predicted result is unlikely unless the theory is true, and we confirm the prediction, then it seems reasonable to assert (with full recognition that one might be wrong) that the theory is probably true.
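For readers who want the probabilistic reading spelled out a bit further, here is one way to formalize it in Bayesian terms (this is only a sketch; nothing in the argument hinges on this particular formalism). The conclusion C1 is a claim about P(T | D), the probability of the theory given the observed result, which Bayes’ rule expresses as:

P(T | D) = P(D | T) P(T) / [ P(D | T) P(T) + P(D | ¬T) P(¬T) ]

The term doing all the work is P(D | ¬T), the probability of obtaining the predicted result even if the theory is false. C1 goes through only when that term is small relative to P(D | T) (and the prior P(T) is not negligible), which is precisely what the updated P1 asserts.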

But now the other shoe drops. Because even if we accept that Argument 4 is (for at least some logical frameworks) valid, we still need to show that it’s sound. And soundness requires the updated P1 to be true. If P1 isn’t true, then the whole enterprise falls apart again; nobody is terribly interested in scientific arguments that are logically valid but empirically false. We saw that P1 in Argument 2 was uncontroversial, but was embedded in a logically invalid argument. And conversely, P1 in Argument 3 was embedded in a logically valid argument, but was clearly indefensible. Now we’re suggesting that P1 in Argument 4, which sits somewhere in between Argument 2 and Argument 3, manages to capture the strengths of both of the previous arguments, while avoiding their weaknesses. But we can’t just assert this by fiat; it needs to be demonstrated somehow. So how do we do that?

The banal answer is that, at this point, we have to start thinking about the meanings of the words contained in P1, and not just about the logical form of the entire argument. Basically, we need to ask ourselves: is it really true that all other explanations for the predicted statistical result are, in the aggregate, unlikely?

Notice that, whether we like it or not, we are now compelled to think about the meaning of the statistical prediction itself. To evaluate the claim that the result A > B (p < .05) would be unlikely unless the theoretical hypothesis is true, we need to understand the statistical model that generated the p-values in question. And that, in turn, forces us to reason inductively, because inferential statistics is, by definition, about induction. The point of deploying inferential statistics, rather than constraining one’s self to only describing the sampled measurements, is to generalize beyond the observed sample to a broader population. If you want to know whether the predicted p-value follows from your theory, you need to know whether the population your verbal hypothesis applies to is well approximated by the population your statistical model affords generalization to. If it isn’t, then there’s no basis for positing a premise like P1.
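To make the stakes concrete, here is a minimal simulation sketch in Python. All of the numbers (10 words per condition, 50 subjects, the variance components) are invented for illustration and are not taken from any real study. The verbal hypothesis concerns cleanliness words in general, but any individual experiment samples a handful of specific words; if words genuinely vary in their effects and the analysis nevertheless treats them as fixed (say, via a paired t-test on subject-level condition means), the nominal 5% error rate applies only to the narrow claim about those exact words, not to the general verbal claim.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_words, n_sims = 50, 10, 2000
word_sd = 0.5    # hypothetical variability across individual words
noise_sd = 1.0   # hypothetical trial-level noise
false_positives = 0

for _ in range(n_sims):
    # Cleanliness truly has no effect; only word sampling and noise are at work.
    words_a = rng.normal(0, word_sd, n_words)  # effects of the specific words in condition A
    words_b = rng.normal(0, word_sd, n_words)  # effects of the specific words in condition B
    # Every subject rates the same words, so whatever is idiosyncratic about this
    # particular word sample shifts all subjects' condition means by the same amount;
    # averaging over the 10 words shrinks trial noise but not that shared shift.
    means_a = words_a.mean() + rng.normal(0, noise_sd / np.sqrt(n_words), n_subjects)
    means_b = words_b.mean() + rng.normal(0, noise_sd / np.sqrt(n_words), n_subjects)
    _, p = stats.ttest_rel(means_a, means_b)
    false_positives += p < 0.05

# With these made-up numbers the proportion comes out far above the nominal .05.
print(f"'Significant' studies when the general claim is false: {false_positives / n_sims:.2f}")
```

The point is not that the t-test is computed incorrectly; it is that the population it licenses generalization to (new subjects, same words) is not the population the verbal hypothesis refers to.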

Once we’ve accepted this much—and to be perfectly blunt about it, if you don’t accept this much, you probably shouldn’t be using inferential statistics in the first place—then we have no choice but to think carefully about the alignment between our verbal and statistical hypotheses. Is P1 in Argument 4 true? Is it really the case that observing A > B, p < .05, would be unlikely unless cleanliness reduces the severity of moral judgments? Well that depends. What population of hypothetical observations does the model that generates the p-value refer to? Does it align with the population implied by the verbal hypothesis?

This is the critical question one must answer, and there’s no way around it. One cannot claim, as Lakens tries to, that psychologists don’t need to worry about inductive inference, because they’re actually doing deduction. Induction and deduction are not in opposition here; they’re actually working in tandem! Even if you agree with Lakens and think that the overarching logic guiding psychological hypothesis testing is of the deductive form expressed in Argument 4 (as opposed to the logically invalid form in Argument 2, as Meehl suggested), you still can’t avoid the embedded inductive step captured by P1, unless you want to give up the use of inferential statistics entirely.

The bottom line is that Lakens—and anyone else who finds the flavor of so-called deductivism he advocates appealing—faces a dilemma with two horns. One way to deal with the fact that Lakensian deductivism is in fact deductively invalid is to lean into it and assert that, logic notwithstanding, this is just how psychologists operate, and the important thing is not whether or not the logic makes deductive sense if you scrutinize it closely, but whether it allows people to get on with their research in a way they’re satisfied with.

The upside of such a position is that it allows you to forever deflect just about any criticism of what you’re doing simply by saying “well, the theory seems to me to follow from the prediction I made“. The downside—and it’s a big one, in my opinion—is that science becomes a kind of rhetorical game, because at that point there’s pretty much nothing anybody else can say to disabuse you of the belief that you’ve confirmed your theory. The only thing that’s required is that the prediction make sense to you (or, if you prefer, to you plus two or three reviewers). A secondary consequence is that it also becomes impossible to distinguish the kind of allegedly scientific activity psychologists engage in from, say, postmodern scholarship, so a rather unwelcome conclusion of taking Lakens’s view seriously is that we may as well extend the label science to the kind of thing that goes on in journals like Social Text. Maybe Lakens is okay with this, but I very much doubt that this is the kind of worldview most psychologists want to commit themselves to.

The more sensible alternative is to accept that the words and statistics we use do actually need to make contact with a common understanding of reality if we’re to be able to make progress. This means that when we say things like “it’s unlikely that we would observe a statistically significant effect here unless our theory is true“, evaluation of such a statement requires that one be able to explain, and defend, the relationship between the verbal claims and the statistical quantities on which the empirical support is allegedly founded.

The latter, rather weak, assumption—essentially, that scientists should be able to justify the premises that underlie their conclusions—is all my paper depends on. Once you make that assumption, nothing more depends on your philosophy of science. You could be a Popperian, a Lakatosian, an inductivist, a Lakensian, or an anarchist… It really doesn’t matter, because, unless you want to embrace the collapse of science into postmodernism, there’s no viable philosophy of science under which scientists get to use words and statistics in whatever way they like, without having to worry about the connection between them. If you expect to be taken seriously as a scientist who uses inferential statistics to draw conclusions from empirical data, you’re committed to caring about the relationship between the statistical models that generate your p-values and the verbal hypotheses you claim to be testing. If you find that too difficult or unpleasant, that’s fine (I often do too!); you can just drop the statistics from your arguments, and then it’s at least clear to people that your argument is purely qualitative, and shouldn’t be accorded the kind of reception we normally reserve (fairly or unfairly) for quantitative science. But you don’t get to claim the prestige and precision that quantitation seems to confer on researchers while doing none of the associated work. And you certainly can’t avoid doing that work simply by insisting that you’re doing a weird, logically fallacious, kind of “deduction“.

Unfair to severity

Lakens’s second major criticism is that I’m too hard on the notion of severity. He argues that I don’t give the Popper/Meehl/Mayo risky prediction/severe testing school of thought sufficient credit, and that it provides a viable alternative to the kind of position he takes me to be arguing for. Lakens makes two main points, which I’ll dub Severity I and Severity II.

Severity I

First, Lakens argues that my dismissal of risky or severe tests as a viable approach in most of psychology is unwarranted. I’ll quote him at length here, because the core of his argument is embedded in some other stuff, and I don’t want to be accused of quoting out of context (note that I did excise one part of the quote, because I deal with it separately below):

Yarkoni’s criticism on the possibility of severe tests is regrettably weak. Yarkoni says that “Unfortunately, in most domains of psychology, there are pervasive and typically very plausible competing explanations for almost every finding.” From his references (Cohen, Lykken, Meehl) we can see he refers to the crud factor, or the idea that the null hypothesis is always false. As we recently pointed out in a review paper on crud (Orben & Lakens, 2019), Meehl and Lykken disagreed about the definition of the crud factor, the evidence of crud in some datasets can not be generalized to all studies in pychology, and “The lack of conceptual debate and empirical research about the crud factor has been noted by critics who disagree with how some scientists treat the crud factor as an “axiom that needs no testing” (Mulaik, Raju, & Harshman, 1997).”. Altogether, I am very unconvinced by this cursory reference to crud makes a convincing point that “there are pervasive and typically very plausible competing explanations for almost every finding”. Risky predictions seem possible, to me, and demonstrating the generalizability of findings is actually one way to perform a severe test.

When Yarkoni discusses risky predictions, he sticks to risky quantitative predictions. As explained in Lakens (2020), “Making very narrow range predictions is a way to make it statistically likely to falsify your prediction if it is wrong. But the severity of a test is determined by all characteristics of a study that increases the capability of a prediction to be wrong, if it is wrong. For example, by predicting you will only observe a statistically significant difference from zero in a hypothesis test if a very specific set of experimental conditions is met that all follow from a single theory, it is possible to make theoretically risky predictions.” … It is unclear to me why Yarkoni does not think that approaches such as triangulation (Munafò & Smith, 2018) are severe tests. I think these approaches are the driving force between many of the more successful theories in social psychology (e.g., social identity theory), and it works fine.

There are several relatively superficial claims Lakens makes in these paragraphs that are either wrong or irrelevant. I’ll take them up below, but let me first address the central claim, which is that, contrary to the argument I make in my paper, risky prediction in the Popper/Meehl/Mayo sense is actually a viable strategy in psychology.

It’s instructive to note that Lakens doesn’t actually provide any support for this assertion; his argument is entirely negative. That is, he argues that I haven’t shown severity to be impossible. This is a puzzling way to proceed, because the most obvious way to refute an argument of the form “it’s almost impossible to do X“ is to just point to a few garden variety examples where people have, in fact, successfully done X. Yet at no point in Lakens’s lengthy review does he provide any actual examples of severe tests in psychology—i.e., of cases where the observed result would be extremely implausible if the favored theory were false. This omission is hard to square with his insistence that severe testing is a perfectly sensible approach that many psychologists already use successfully. Hundreds of thousands of papers have been published in psychology over the past century; if an advocate of a particular methodological approach can’t identify even a tiny fraction of the literature that has successfully applied that approach, how seriously should that view be taken by other people?

As background, I should note that Lakens’s inability to give concrete examples of severe testing isn’t peculiar to his review of my paper; in various interactions we’ve had over the last few years, I’ve repeatedly asked him to provide such examples. He’s obliged exactly once, suggesting this paper, titled Ego Depletion Is Not Just Fatigue: Evidence From a Total Sleep Deprivation Experiment by Vohs and colleagues.

In the sole experiment Vohs et al. report, they purport to test the hypothesis that ego depletion is not just fatigue (one might reasonably question whether there’s any non-vacuous content to this hypothesis to begin with, but that’s a separate issue). They proceed by directing participants who either have or have not been deprived of sleep to either suppress or not suppress their emotions while viewing disgusting video clips. In a subsequent game, they then ask the same participants to decide (seemingly incidentally) how loud a noise to blast an opponent with—a putative measure of aggression. The results show that participants who suppressed emotion selected louder volumes than those who did not, whereas the sleep deprivation manipulation had no effect.

I leave it as an exercise to the reader to decide for themselves whether the above example is a severe test of the theoretical hypothesis. To my mind, at least, it clearly isn’t; it fits very comfortably into the category of things that Meehl and Lakatos had in mind when discussing the near-total disconnect between verbal theories and purported statistical evidence. There are dozens, if not hundreds, of ways one might obtain the predicted result even if the theoretical hypothesis Vohs et al. articulate were utterly false (starting from the trivial observation that one could obtain the pattern the authors reported even if the two manipulations tapped exactly the same construct but were measured with different amounts of error). There is nothing severe about the test, and to treat it as such is to realize Meehl and Lakatos’s worst fears about the quality of hypothesis-testing in much of psychology.

To be clear, I did not suggest in my paper (nor am I here) that severe tests are impossible to construct in psychology. I simply observed that they’re not a realistic goal in most domains, particularly in “soft“ areas (e.g., social psychology). I think I make it abundantly clear in the paper that I don’t see this as a failing of psychologists, or of their favored philosophy of science; rather, it’s intrinsic to the domain itself. If you choose to study extremely complex phenomena, where any given behavior is liable to be a product of an enormous variety of causal factors interacting in complicated ways, you probably shouldn’t expect to be able to formulate clear law-like predictions capable of unambiguously elevating one explanation above others. Social psychology is not physics, and there’s no reason to think that methodological approaches that work well when one is studying electrons and quarks should also work well when one is studying ego depletion and cognitive dissonance.

As for the problematic minor claims in the paragraphs I quoted above (you can skip down to the “Severity II“ section if you’re bored or short on time)… First, the citations to Cohen, Lykken, and Meehl contain well-developed arguments to the same effect as my claim that “there are pervasive and typically very plausible competing explanations for almost every finding“. These arguments do not depend on what one means by “crud“, which is the subject of Orben & Lakens (2019). The only point relevant to my argument is that outcomes in psychology are overwhelmingly determined by many factors, so that it’s rare for a hypothesized effect in psychology to have no plausible explanation other than the authors’ preferred theoretical hypothesis. I think this is self-evidently true, and needs no further justification. But if you think it does require justification, I invite you to convince yourself of it in the following easy steps: (1) Write down 10 or 20 random effects that you feel are a reasonably representative sample of your field. (2) For each one, spend 5 minutes trying to identify alternative explanations for the predicted result that would be plausible even if the researcher’s theoretical hypothesis were false. (3) Observe that you were able to identify plausible confounds for all of the effects you wrote down. There, that was easy, right?

Second, it isn’t true that I stick to risky quantitative predictions. I explicitly note that risky predictions can be non-quantitative:

The canonical way to accomplish this is to derive from one’s theory some series of predictions—typically, but not necessarily, quantitative in nature—sufficiently specific to that theory that they are inconsistent with, or at least extremely implausible under, other accounts.

I go on to describe several potential non-quantitative approaches (I even cite Lakens!):

This does not mean, however, that vague directional predictions are the best we can expect from psychologists. There are a number of strategies that researchers in such fields could adopt that would still represent at least a modest improvement over the status quo (for discussion, see Meehl, 1990). For example, researchers could use equivalence tests (Lakens, 2017); predict specific orderings of discrete observations; test against compound nulls that require the conjunctive rejection of many independent directional predictions; and develop formal mathematical models that posit non-trivial functional forms between the input and output (Marewski & Olsson, 2009; Smaldino, 2017).

Third, what Lakens refers to as “triangulation“ is, as far as I can tell, conceptually akin to the logical conjunction of effects suggested above, so again, it’s unfair to say that I oppose this idea. I support it—in principle. However, two points are worth noting. First, the practical barrier to treating conjunctive rejections as severe tests is that it requires researchers to actually hold their own feet to the fire by committing ahead of time to the specific conjunction that they deem a severe test. It’s not good enough to state ahead of time that the theory makes 6 predictions, and then, when the results confirm only 4 of those predictions, to generate some post-hoc explanation for the 2 failed predictions while still claiming that the theory managed to survive a critical test.

Second, as we’ve already seen, the mere fact that a researcher believes a test is severe does not actually make it so, and there are good reasons to worry that many researchers grossly underestimate the degree of actual support a particular statistical procedure (or conjunction of procedures) actually confers on a theory. For example, you might naively suppose that if your theory makes 6 independent directional predictions—implying a probability of (1/2)^6, or about 1.6%, of getting all 6 right purely by chance—then joint corroboration of all your predictions provides strong support for your theory. But this isn’t generally the case, because many plausible competing accounts in psychology will tend to generate similarly-signed predictions. As a trivial example, when demand characteristics are present, they will typically tend to push in the direction of the researcher’s favored hypotheses.
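Here is a quick back-of-the-envelope simulation in Python of that last point (the size of the systematic push is an arbitrary, made-up value): six directional predictions are only worth (1/2)^6 if the ways they could come out right by accident are themselves independent, and a single shared confound breaks that independence.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_predictions = 20000, 6
push = 0.5  # hypothetical systematic push (e.g., demand characteristics), in SD units

# The theory is false in both scenarios; outcomes are just noise around a true effect of zero.
# Scenario 1: no confound, so each directional prediction is a fair coin flip.
pure_noise = rng.normal(0.0, 1.0, (n_sims, n_predictions))
# Scenario 2: the same confound nudges every outcome toward the predicted direction.
confounded = push + rng.normal(0.0, 1.0, (n_sims, n_predictions))

print("P(all 6 directions 'confirmed' | pure chance):    ",
      round(float(np.mean((pure_noise > 0).all(axis=1))), 3))   # ~ .016, i.e. (1/2)^6
print("P(all 6 directions 'confirmed' | shared confound):",
      round(float(np.mean((confounded > 0).all(axis=1))), 3))   # several times higher
```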

The bottom line is that, while triangulation is a perfectly sensible strategy in principle, deploying it in a way that legitimately produces severe tests of psychological theories does not seem any easier than the other approaches I mention—nor, again, does Lakens seem able to provide any concrete examples.

Severity II

Lakens’s second argument regarding severity (or my alleged lack of respect for it) is that I put the cart before the horse: whereas I focus largely on the generalizability of claims made on the basis of statistical evidence, Lakens argues that generalizability is purely an instrumental goal, and that the overarching objective is severity. He writes:

I think the reason most psychologists perform studies that demonstrate the generalizability of their findings has nothing to do with their desire to inductively build a theory from all these single observations. They show the findings generalize, because it increases the severity of their tests. In other words, according to this deductive approach, generalizability is not a goal in itself, but a it follows from the goal to perform severe tests.

And:

Generalization as a means to severely test a prediction is common, and one of the goals of direct replications (generalizing to new samples) and conceptual replications (generalizing to different procedures). Yarkoni might disagree with me that generalization serves severity, not vice versa. But then what is missing from the paper is a solid argument why people would want to generalize to begin with, assuming at least a decent number of them do not believe in induction. The inherent conflict between the deductive approaches and induction is also not explained in a satisfactory manner.

As a purported criticism of my paper, I find this an unusual line of argument, because not only does it not contradict anything I say in my paper, it actually directly affirms it. In effect, Lakens is saying yes, of course it matters whether the statistical model you use maps onto your verbal hypothesis; how else would you be able to formulate a severe test of the hypothesis using inferential statistics? Well, I agree with him! My only objection is that he doesn’t follow his own argument far enough. He writes that “generalization as a means to severely test a prediction is common“, but he’s being too modest. It isn’t just common; for studies that use inferential statistics, it’s universal. If you claim to be using statistical results to test your theoretical hypotheses, you’re obligated to care about the alignment between the universes of observations respectively defined by your verbal and statistical hypotheses. As I’ve pointed out at length above, this isn’t a matter of philosophical disagreement (i.e., of some imaginary “inherent conflict between the deductive approaches and induction“); it’s definitional. Inferential statistics is about generalizing from samples to populations. How could you possibly assert that a statistical test of a hypothesis is severe if you have no idea whether the population defined by your statistical model aligns with the one defined by your verbal hypothesis? Can Lakens provide an example of a severe statistical test that doesn’t require one to think about what population of observations a model applies to? I very much doubt it.

For what it’s worth, I don’t think the severity of hypothesis testing is the only reason to worry about the generalizability of one’s statistical results. We can see this trivially, inasmuch as severity only makes sense in a hypothesis testing context, whereas generalizability matters any time inferential statistics (which make reference to some idealized population) are invoked. If you report a p-value from a linear regression model, I don’t need to know what hypothesis motivated the analysis in order to interpret the results, but I do need to understand what universe of hypothetical observations the statistical model you specified refers to. If Lakens wants to argue that statistical results are uninterpretable unless they’re presented as confirmatory tests of an a priori hypothesis, that’s his prerogative (though I doubt he’ll find many takers for that view). At the very least, though, it should be clear that his own reasoning gives one more, and not less, reason to take the arguments in my paper seriously.

Hopelessly impractical

[Attention conservation notice: the above two criticisms are the big ones; you can safely stop reading here without missing much. The stuff below is frankly more a reflection of my irritation at some of Lakens’s rhetorical flourishes than about core conceptual issues.]

A third theme that shows up repeatedly in Lakens’s review is the idea that the arguments I make, while perhaps reasonable from a technical standpoint, are far too onerous to expect real researchers to implement. There are two main strands of argument here. Both of them, in my view, are quite wrong. But one of them is wrong and benign, whereas the other is wrong and possibly malignant.

Impractical I

The first (benign) strand is summarized by Lakens’s Point 3, which he titles theories and tests are not perfectly aligned in deductive approaches. As we’ll see momentarily, “perfectly“ is a bit of a weasel word that’s doing a lot of work for Lakens here. But his general argument is that you only need to care about the alignment between statistical and verbal specifications of a hypothesis if you’re an inductivist:

To generalize from a single observation to a general theory through induction, the sample and the test should represent the general theory. This is why Yarkoni is arguing that there has to be a direct correspondence between the theoretical model, and the statistical test. This is true in induction.

I’ve already spent several thousand words above explaining why this is simply false. To recap (I know I keep repeating myself, but this really is the crux of the whole issue): if you’re going to report inferential statistics and claim that they provide support for your verbal hypotheses, then you’re obligated to care about the correspondence between the test and the theory. This doesn’t require some overarching inductivist philosophy of science (which is fortunate, because I don’t hold one myself); it only requires you to believe that when you make statements of the form “statistic X provides evidence for verbal claim Y“, you should be able to explain why that’s true. If you can’t explain why the p-value (or Bayes Factor, etc.) from that particular statistical specification supports your verbal hypothesis, but a different specification that produces a radically different p-value wouldn’t, it’s not clear why anybody else should take your claims seriously. After all, inferential statistics aren’t (or at least, shouldn’t be) just a kind of arbitrary numerical magic we sprinkle on top of our words to get people to respect us. They mean things. So the alternative to caring about the relationship between inferential statistics and verbal claims is not, as Lakens seems to think, deductivism—it’s ritualism.

The tacit recognition of this point is presumably why Lakens is careful to write that “theories and tests are not perfectly aligned in deductive approaches“ (my emphasis). If he hadn’t included the word “perfectly“, the claim would seem patently silly, since theories and tests obviously need to be aligned to some degree no matter what philosophical view one adopts (save perhaps for outright postmodernism). Lakens’s argument here only makes any sense if the reader can be persuaded that my view, unlike Lakens’s, demands perfection. But it doesn’t (more on that below).

Lakens then goes on to address one of the central planks of my argument, namely, the distinction between fixed and random factors (which typically has massive implications for the p-values one observes). He suggests that while the distinction is real, it’s wildly unrealistic to expect anybody to actually be able to respect it:

If I want to generalize beyond my direct observations, which are rarely sampled randomly from all possible factors that might impact my estimate, I need to account for uncertainty in the things I have not observed. As Yarkoni clearly explains, one does this by adding random factors to a model. He writes (p. 7) “Each additional random factor one adds to a model licenses generalization over a corresponding population of potential measurements, expanding the scope of inference beyond only those measurements that were actually obtained. However, adding random factors to one’s model also typically increases the uncertainty with which the fixed effects of interest are estimated”. You don’t need to read Popper to see the problem here – if you want to generalize to all possible random factors, there are so many of them, you will never be able to overcome the uncertainty and learn anything. This is why inductive approaches to science have largely been abandoned.

You don’t need to read Paul Meehl’s Big Book of Logical Fallacies to see that Lakens is equivocating. He equates wanting to generalize beyond one’s sample with wanting to generalize “to all possible random factors“—as if the only two possible interpretations of an effect are that it either generalizes to all conceivable scenarios, or that it can’t be generalized beyond the sample at all. But this just isn’t true; saying that researchers should build statistical models that reflect their generalization intentions is not the same as saying that every mixed-effects model needs to include all variance components that could conceivably have any influence, however tiny, on the measured outcomes. Lakens presents my argument as a statistically pedantic, technically-correct-but-hopelessly-ineffectual kind of view—at which point it’s supposed to become clear to the reader that it’s just crazy to expect psychologists to proceed in the way I recommend. And I agree that it would be crazy—if that was actually what I was arguing. But it isn’t. I make it abundantly clear in my paper that aligning verbal and statistical hypotheses needn’t entail massive expansion of the latter; it can also (and indeed, much more feasibly) entail contraction of the former. There’s an entire section in the paper titled Draw more conservative inferences that begins with this:

Perhaps the most obvious solution to the generalizability problem is for authors to draw much more conservative inferences in their manuscripts—and in particular, to replace the hasty generalizations pervasive in contemporary psychology with slower, more cautious conclusions that hew much more closely to the available data. Concretely, researchers should avoid extrapolating beyond the universe of observations implied by their experimental designs and statistical models. Potentially relevant design factors that are impractical to measure or manipulate, but that conceptual considerations suggest are likely to have non-trivial effects (e.g., effects of stimuli, experimenter, research site, culture, etc.), should be identified and disclosed to the best of authors’ ability.

Contra Lakens, this is hardly an impractical suggestion; if anything, it offers to reduce many authors’ workload, because Introduction and Discussion sections are typically full of theoretical speculations that go well beyond the actual support of the statistical results. My prescription, if taken seriously, would probably shorten the lengths of a good many psychology papers. That seems pretty practical to me.

Moreover—and again contrary to Lakens’s claim—following my prescription would also dramatically reduce uncertainty rather than increasing it. Uncertainty arises when one lacks data to inform one’s claims or beliefs. If maximal certainty is what researchers want, there are few better ways to achieve that than to make sure their verbal claims cleave as closely as possible to the boundaries implicitly defined by their experimental procedures and statistical models, and hence depend on fewer unmodeled (and possibly unknown) variables.

Impractical II

The other half of Lakens’s objection from impracticality is to suggest that, even if the arguments I lay out have some merit from a principled standpoint, they’re of little practical use to most researchers, because I don’t do enough work to show readers how they can actually use those principles in their own research. Lakens writes:

The issues about including random factors is discussed in a more complete, and importantly, applicable, manner in Barr et al (2013). Yarkoni remains vague on which random factors should be included and which not, and just recommends ‘more expansive’ models. I have no idea when this is done satisfactory. This is a problem with extreme arguments like the one Yarkoni puts forward. It is fine in theory to argue your test should align with whatever you want to generalize to, but in practice, it is impossible. And in the end, statistics is just a reasonably limited toolset that tries to steer people somewhat in the right direction. The discussion in Barr et al (2013), which includes trade-offs between converging models (which Yarkoni too easily dismisses as solved by modern computational power – it is not solved) and including all possible factors, and interactions between all possible factors, is a bit more pragmatic.

And:

As always, it is easy to argue for extremes in theory, but this is generally uninteresting for an applied researcher. It would be great if Yarkoni could provide something a bit more pragmatic about what to do in practice than his current recommendation about fitting “more expansive models” – and provides some indication where to stop, or at least suggestions what an empirical research program would look like that tells us where to stop, and why.

And:

Previous authors have made many of the same points, but in a more pragmatic manner (e.g., Barr et al., 2013; Clark, 1974). Yarkoni fails to provide any insights into where the balance between generalizing to everything, and generalizing to factors that matter, should lie, nor does he provide an evaluation of how far off this balance research areas are. It is easy to argue any specific approach to science will not work in theory – but it is much more difficult to convincingly argue it does not work in practice.

There are many statements in Lakens’s review that made me shake my head, but the argument advanced in the above quotes is the only one that filled me (briefly) with rage. In part that’s because parts of what Lakens says here blatantly misrepresent my paper. For example, he writes that “Yarkoni just recommends ‘more expansive models’“, which is frankly a bit insulting given that I spend a full third of my paper talking about various ways to address the problem (e.g., by designing studies that manipulate many factors at once; by conducting meta-analyses over variance components; etc.).

Similarly, Lakens implies that Barr et al. (2013) gives better versions of my arguments, when actually the two papers are doing completely different things. Barr et al. (2013) is a fantastic paper, but it focuses almost entirely on the question of how one should specify and estimate mixed-effects models, and says essentially nothing about why researchers should think more carefully about random factors, or which ones researchers ought to include in their model. One way to think about it is that Barr et al. (2013) is the paper you should read after my paper has convinced you that it actually matters a lot how you specify your random-effects structure. Of course, if you’re already convinced of the latter (which many people are, though Lakens himself doesn’t seem to be), then yeah, you should maybe skip my paper—you’re not the intended audience.

In any case, the primary reason I found this part of Lakens’s review upsetting is that the above quotes capture a very damaging, but unfortunately also very common, sentiment in psychology, which is the apparent belief that somebody—and perhaps even nature itself—owes researchers easy solutions to extremely complex problems.

Lakens writes that “Yarkoni remains vague on which random factors should be included and which not”, and that “It would be great if Yarkoni could provide something a bit more pragmatic about what to do in practice than his current recommendation about fitting ‘more expansive models’”. Well, on a superficial level, I agree with Lakens: I do remain vague on which factors should be included, and it would be lovely if I were able to say something like “here, Daniel, I’ve helpfully identified for you the five variance components that you need to care about in all your studies”. But I can’t say something like that, because it would be a lie. There isn’t any such one-size-fits-all prescription—and trying to pretend there is would, in my view, be deeply counterproductive. Psychology is an enormous field full of people trying to study a very wide range of complex phenomena. There is no good reason to suppose that the same sources of variance will assume even approximately the same degree of importance across broad domains, let alone individual research questions. Should psychophysicists studying low-level visual perception worry about the role of stimulus, experimenter, or site effects? What about developmental psychologists studying language acquisition? Or social psychologists studying cognitive dissonance? I simply don’t know.

One reason I don’t know, as I explain in my paper, is that the answer depends heavily on what conclusions one intends to draw from one’s analyses—i.e., on one’s generalization intentions. I hope Lakens would agree with me that it’s not my place to tell other people what their goal should be in doing their research. Whether or not a researcher needs to model stimuli, sites, tasks, etc. as random factors depends on what claim they intend to make. If a researcher intends to behave as if their results apply to a population of stimuli like the ones used in their study, and not just to the exact sampled stimuli, then they should use a statistical model that reflects that intention. But if they don’t care to make that generalization, and are comfortable drawing no conclusions beyond the confines of the tested stimuli, then maybe they don’t need to worry about explicitly modeling stimulus effects at all. Either way, what determines whether or not a statistical model is or isn’t appropriate is whether or not that model adequately captures what a researcher claims it’s capturing—not whether Tal Yarkoni has data suggesting that, on average, site effects are large in one area of social psychology but not large in another area of psychophysics.
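
To make the stakes concrete, here’s a toy simulation (a sketch in Python; the particular numbers, ten stimuli per condition, fifty trials per stimulus, and so on, are values I’ve picked purely for illustration, not estimates from any real study). There is no true condition effect in the simulated data, yet a trial-level t-test that ignores the stimulus factor declares a “significant” difference far more often than 5% of the time, while the same test run on stimulus-level means stays near the nominal rate:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def one_experiment(n_stim=10, n_trials=50, stim_sd=0.5, noise_sd=1.0):
        """Two conditions, each with its own sample of stimuli; no true condition effect."""
        # Each stimulus has its own idiosyncratic effect (the random factor).
        stim_a = rng.normal(0, stim_sd, n_stim)
        stim_b = rng.normal(0, stim_sd, n_stim)
        # Trial-level observations: stimulus effect plus trial-level noise.
        trials_a = rng.normal(stim_a[:, None], noise_sd, (n_stim, n_trials))
        trials_b = rng.normal(stim_b[:, None], noise_sd, (n_stim, n_trials))
        # Analysis 1: ignore the stimulus factor and treat all trials as independent.
        p_ignore = stats.ttest_ind(trials_a.ravel(), trials_b.ravel()).pvalue
        # Analysis 2: aggregate to stimulus means, so stimuli are the units of analysis.
        p_stim = stats.ttest_ind(trials_a.mean(axis=1), trials_b.mean(axis=1)).pvalue
        return p_ignore < 0.05, p_stim < 0.05

    results = np.array([one_experiment() for _ in range(2000)])
    print("False positive rate, stimulus factor ignored:", results[:, 0].mean())
    print("False positive rate, stimulus-level analysis:", results[:, 1].mean())

The point of the sketch isn’t that aggregating over stimuli is the one right fix (a crossed mixed-effects model of the kind Barr et al. discuss is usually a better option); it’s that whether the trial-level analysis is defensible depends entirely on whether you intend your conclusions to apply beyond the particular stimuli you happened to sample.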

The other reason I can’t provide concrete guidance about what factors psychologists ought to model as random is that attempting to establish even very rough generalizations of this sort would involve an enormous amount of work—and the utility of that work would be quite unclear, given how contextually specific the answers are likely to be. Lakens himself seems to recognize this; at one point in his review, he suggests that the topic I address “probably needs a book length treatment to do it justice.“ Well, that’s great, but what are working researchers supposed to do in the meantime? Is the implication that psychologists should feel free to include whatever random effects they do or don’t feel like in their models until such time as someone shows up with a compendium of variance component estimates that apply to different areas of psychology? Does Lakens also dismiss papers seeking to convince people that it’s important to consider statistical power when designing studies, unless those papers also happen to provide ready-baked recommendations for what an appropriate sample size is for different research areas within psychology? Would he also conclude that there’s no point in encouraging researchers to define “smallest effect sizes of interest“, as he himself has done in the past, unless one can provide concrete recommendations for what those numbers should be?

I hope not. Such a position would amount to shooting the messenger. The argument in my paper is that model specification matters, and that researchers need to think about that carefully. I think I make that argument reasonably clearly and carefully. Beyond that, I don’t think it’s my responsibility to spend the next N years of my own life trying to determine what factors matter most in social, developmental, or cognitive psychology, just so that researchers in those fields can say, “thanks, your crummy domain-general estimates are going to save me from having to think deeply about what influences matter in my own particular research domain“. I think it’s every individual researcher’s job to think that through for themselves, if they expect to be taken seriously.

Lastly, and at the risk of being a bit petty (sorry), I can’t resist pointing out what strikes me as a rather serious internal contradiction between Lakens’s claim that my arguments are unhelpful unless they come with pre-baked variance estimates, and his own stated views about severity. On the one hand, Lakens claims that psychologists ought to proceed by designing studies that subject their theoretical hypotheses to severe tests. On the other hand, he seems to have no problem with researchers mindlessly following field-wide norms when specifying their statistical models (e.g., modeling only subjects as random effects, because those are the current norms). I find these two strands of thought difficult to reconcile. As we’ve already seen, the severity of a statistical procedure as a test of a theoretical hypothesis depends on the relationship between the verbal hypothesis and the corresponding statistical specification. How, then, could a researcher possibly feel confident that their statistical procedure constitutes a severe test of their theoretical hypothesis, if they’re using an off-the-shelf model specification and have no idea whether they would have obtained radically different results if they had randomly sampled a different set of stimuli, participants, experimenters, or task operationalizations?

Obviously, they can’t. Having to think carefully about what the terms in one’s statistical model mean, how they relate to one’s theoretical hypothesis, and whether those assumptions are defensible, isn’t at all “impractical”; it’s necessary. If you can’t explain clearly why a model specification that includes only subjects as random effects constitutes a severe test of your hypothesis, why would you expect other people to take your conclusions at face value?

Trouble with titles

There’s one last criticism Lakens raises in his review of my paper. It concerns claims I make about the titles of psychology papers:

This is a minor point, but I think a good illustration of the weakness of some of the main arguments that are made in the paper. On the second page, Yarkoni argues that “the vast majority of psychological scientists have long operated under a regime of (extremely) fast generalization”. I don’t know about the vast majority of scientists, but Yarkoni himself is definitely using fast generalization. He looked through a single journal, and found 3 titles that made general statements (e.g., “Inspiration Encourages Belief in God”). When I downloaded and read this article, I noticed the discussion contains a ‘constraint on generalizability’ in the discussion, following (Simons et al., 2017). The authors wrote: “We identify two possible constraints on generality. First, we tested our ideas only in American and Korean samples. Second, we found that inspiring events that encourage feelings of personal insignificance may undermine these effects.”. Is Yarkoni not happy with these two sentence clearly limiting the generalizability in the discussion?

I was initially going to respond to this in detail, but ultimately decided against it, because (a) by Lakens’s own admission, it’s a minor concern; (b) this is already very long as-is; and (c) while it’s a minor point in the context of my paper, I think this issue has some interesting and much more general implications for how we think about titles. So I’ve decided I won’t address it here, but will eventually take it up in a separate piece that gives it a more general treatment, and that includes a kind of litmus test one can use to draw reasonable conclusions about whether or not a title is appropriate. But, for what it’s worth, I did do a sweep through the paper in the process of revision, and have moderated some of the language.

Conclusion

Daniel Lakens argues that psychologists don’t need to care much if at all about the relationship between their statistical model specifications and their verbal hypotheses, because hypothesis testing in psychology proceeds deductively: researchers generate predictions from their theories, and then update their confidence in their theories on the basis of whether or not those predictions are confirmed. This all sounds great until you realize that those predictions are almost invariably evaluated using inferential statistical methods that are inductive by definition. So long as psychologists are relying on inferential statistics as decision aids, there can be no escape from induction. Deduction and induction are not competing philosophies or approaches; the standard operating procedure in psychology is essentially a hybrid of the two.

If you don’t like the idea that the ability to appraise a verbal hypothesis using statistics depends critically on the ability to understand and articulate how the statistical terms map onto the verbal ideas, that’s fine; an easy way to solve that problem is to just not use inferential statistics. That’s a perfectly reasonable position, in my view (and one I discuss at length in my paper). But once you commit yourself to relying on things like p-values and Bayes Factors to help you decide what you believe about the world, you’re obligated to think about, justify, and defend your statistical assumptions. They aren’t, or shouldn’t be, just a kind of pedantic technical magic you can push-button sprinkle on top of your favorite verbal hypotheses to make them really stick.

The parable of the three districts: A projective test for psychologists

A political candidate running for regional public office asked a famous political psychologist what kind of television ads she should air in three heavily contested districts: positive ones emphasizing her own record, or negative ones attacking her opponent’s record.

“You’re in luck,“ said the psychologist. “I have a new theory of persuasion that addresses exactly this question. I just published a paper containing four large studies that all strongly support the theory and show that participants are on average more persuaded by attack ads than by positive ones.“

Convinced by the psychologist’s arguments and his confident demeanor, the candidate’s campaign ran carefully tailored attack ads in all three districts. She proceeded to lose the race by a landslide, with exit surveys placing much of the blame on the negative tone of her ads.

As part of the campaign post-mortem, the candidate asked the psychologist what he thought had gone wrong.

“Oh, different things,” said the psychologist. “In hindsight, the first district was probably too educated; I could see how attack ads might turn off highly educated voters. In the second district—and I’m not going to tiptoe around the issue here—I think the problem was sexism. You have a lot of low-SES working-class men in that district who probably didn’t respond well to a female candidate publicly criticizing a male opponent. And in the third district, I think the ads you aired were just too over the top. You want to highlight your opponent’s flaws subtly, not make him sound like a cartoon villain.”

“That all sounds reasonable enough,“ said the candidate. “But I’m a bit perplexed that you didn’t mention any of these subtleties ahead of time, when they might have been more helpful.“

“Well,“ said the psychologist. “That would have been very hard to do. The theory is true in general, you see. But every situation is different.“

Big Data, n. A kind of black magic

The annual Association for Psychological Science meeting is coming up in San Francisco this week. One of the cross-cutting themes this year is “Big Data: Understanding Patterns of Human Behavior”. Since I’m giving two Big Data-related talks (1, 2), and serving as discussant on a related symposium, I’ve been spending some time recently trying to come up with a sensible definition of Big Data within the context of psychological science. This has, in turn, led me to ponder the meaning of Big Data more generally.

After mulling it over for a while, I’ve concluded that producing a unitary, comprehensive, domain-general definition of Big Data is probably not possible, for the simple reason that different communities have adopted and co-opted the term for decidedly different purposes. For example, in the field of psychology, the very largest datasets that most researchers currently work with contain, at most, tens of thousands of cases and a few hundred variables (there are exceptions, of course). Such datasets fit comfortably into memory on any modern laptop; you’d have a hard time finding (m)any data scientists willing to call a dataset of this scale “Big”. Yet here we are, heading into APS, with multiple sessions focusing on the role of Big Data in psychological science. And psychology’s not unusual in this respect; we’re seeing similar calls for Big Data this and Big Data that in pretty much all branches of science and every area of the business world. I mean, even the humanities are getting in on the action.

You could take a cynical view of this and argue that all this really goes to show is that people like buzzwords. And there’s probably some truth to that. More pragmatically, though, we should acknowledge that language is this flexible kind of thing that likes to reshape itself from time to time. Words don’t have any intrinsic meaning above and beyond what we do with them, and it’s certainly not like anyone has a monopoly on a term that only really exploded into the lexicon circa 2011. So instead of trying to come up with a single, all-inclusive definition of Big Data, I’ve instead opted to try and make sense of the different usages we’re seeing in different communities. Below I suggest three distinct, but overlapping, definitions–corresponding to three different ways of thinking about what makes data “Big”. They are, roughly, (1) the kind of infrastructure required to support data processing, (2) the size of the dataset relative to the norm in a field, and (3) the complexity of the models required to make sense out of the data. To a first approximation, one can think of these as engineering, scientific, and statistical perspectives on Big Data, respectively.

The engineering perspective

One way to define Big Data is in terms of the infrastructure required to analyze the data. This is the closest thing we have to a classical definition. In fact, this way of thinking about what makes data “big” arguably predates the term Big Data itself. Take this figure, courtesy of Google Trends:

[Figure: Google Trends search interest over time for “Hadoop” and “Big Data”]

Notice that searches for Hadoop (a framework for massively distributed data-intensive computing) actually precede the widespread use of the term “Big Data” by a couple of years. If you’re the kind of person who likes to base their arguments entirely on search-based line graphs from Google (and I am!), you have here a rather powerful Exhibit A.

Alternatively, if you’re a more serious kind of person who privileges reason over pretty line plots, consider the following, rather simple, argument for Big Data qua infrastructure problem: any dataset that keeps growing is eventually going to get too big–meaning, it will inevitably reach a point at which it no longer fits into memory, or even onto local storage–and will then require a fundamentally different, massively parallel architecture to process. If you can solve your alleged “big data” problems by installing a new hard drive or some more RAM, you don’t really have a Big Data problem, you have an I’m-too-lazy-to-deal-with-this-right-now problem.

A real Big Data problem, from an engineering standpoint, is what happens once you’ve installed all the RAM your system can handle, maxed out your RAID array, and heavily optimized your analysis code, yet still find yourself unable to process your data in any reasonable amount of time. If you then complain to your IT staff about your computing problems and they start ranting to you about Hadoop and Hive and how you need to hire a bunch of engineers so you can build out a cluster and do Big Data the way Big Data is supposed to be done, well, congratulations–you now have a Big Data problem in the engineering sense. You now need to figure out how to build a highly distributed computing platform capable of handling really, really, large datasets.

Once the hungry wolves of Big Data have been temporarily pacified by building a new data center (or, you know, paying for an AWS account), you may have to rewrite at least part of your analysis code to take advantage of the massive parallelization your new architecture affords. But conceptually, you can probably keep asking and answering the same kinds of questions with your data. In this sense, Big Data isn’t directly about the data itself, but about what the data makes you do: a dataset counts as “Big” whenever it causes you to start whispering sweet nothings in Hadoop’s ear at night. Exactly when that happens will depend on your existing infrastructure, the demands imposed by your data, and so on. On modern hardware, some people have suggested that the transition tends to happen fairly consistently when datasets get to around 5 – 10 TB in size. But of course, that’s just a loose generalization, and we all know that loose generalizations are always a terrible idea.
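
If you’ve never seen what that kind of rewrite involves, here’s a miniature sketch (Python’s standard library plus NumPy; the “chunks” are just synthetic arrays standing in for file shards on a distributed store). The trick is to express the computation as per-chunk partial results plus a final combination step, which is the same map-and-reduce shape that frameworks like Hadoop parallelize across a cluster:

    from multiprocessing import Pool

    import numpy as np

    # Toy stand-in for a dataset too big to load at once: each "chunk" is generated
    # from a seed; in real life it would be a file shard on a distributed filesystem.
    CHUNK_SEEDS = range(32)
    CHUNK_SIZE = 1_000_000

    def map_chunk(seed):
        """Compute partial results (sum and count) for one chunk."""
        chunk = np.random.default_rng(seed).normal(loc=5.0, size=CHUNK_SIZE)
        return chunk.sum(), chunk.size

    def reduce_partials(partials):
        """Combine per-chunk partial results into a global mean."""
        total = sum(s for s, _ in partials)
        count = sum(n for _, n in partials)
        return total / count

    if __name__ == "__main__":
        with Pool() as pool:  # one worker process per CPU core
            partials = pool.map(map_chunk, CHUNK_SEEDS)
        print("Global mean:", reduce_partials(partials))

Computing a mean this way is obviously overkill on a single laptop; the point is just that the partial-results structure is what lets you farm the chunks out to many machines instead of a handful of local processes.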

The scientific perspective

Defining Big Data in terms of architecture and infrastructure is all well and good in domains where normal operations regularly generate terabytes (or even–gasp–petabytes!) of data. But the reality is that most people–and even, I would argue, many people whose job title currently includes the word “data” in it–will rarely need to run analyses distributed across hundreds or thousands of nodes. If we stick with the engineering definition of Big Data, this means someone like me–a lowly social or biomedical scientist who frequently deals with “large” datasets, but almost never with gigantic ones–doesn’t get to say they do Big Data. And that seems kind of unfair. I mean, Big Data is totally in right now, so why should corporate data science teams and particle physicists get to have all the fun? If I want to say I work with Big Data, I should be able to say I work with Big Data! There’s no way I can go to APS and give talks about Big Data unless I can unashamedly look myself in the mirror and say, look at that handsome, confident man getting ready to go to APS and talk about Big Data. So it’s imperative that we find a definition of Big Data that’s compatible with the kind of work people like me do.

Hey, here’s one that works:

Big Data, n. The minimum amount of data required to make one’s peers uncomfortable with the size of one’s data.

This definition is mostly facetious–but it’s a special kind of facetiousness that’s delicately overlaid on top of an earnest, well-intentioned core. The earnest core is that, in practice, many people who think of themselves as Big Data types but don’t own a timeshare condo in Hadoop Land implicitly seem to define Big Data as any dataset large enough to enable new kinds of analyses that weren’t previously possible with smaller datasets. Exactly what dimensionality of data is sufficient to attain this magical status will vary by field, because conventional dataset sizes vary by field. For instance, in human vision research, many researchers can get away with collecting a few hundred trials from three subjects in one afternoon and calling it a study. In contrast, if you’re a population geneticist working with raw sequence data, you probably deal with fuhgeddaboudit amounts of data on a regular basis. So clearly, what it means to be in possession of a “big” dataset depends on who you are. But the point is that in every field there are going to be people who look around and say, you know what? Mine’s bigger than everyone else’s. And those are the people who have Big Data.

I don’t mean that pejoratively, mind you. Quite the contrary: an arms race towards ever-larger datasets strikes me as a good thing for most scientific fields to have, regardless of whether or not the motives for the data embiggening are perfectly cromulent. Having more data often lets you do things that you simply couldn’t do with smaller datasets. With more data, confidence intervals shrink, so effect size estimates become more accurate; it becomes easier to detect and characterize higher-order interactions between variables; you can stratify and segment the data in various ways, explore relationships with variables that may not have been of a priori interest; and so on and so forth. Scientists, by and large, seem to be prone to thinking of Big Data in these relativistic terms, so that a “Big” dataset is, roughly, a dataset that’s large enough and rich enough that you can do all kinds of novel and interesting things with it that you might not have necessarily anticipated up front. And that’s refreshing, because if you’ve spent much time hanging around science departments, you’ll know that the answer to about 20% of all questions during Q&A periods ends with the words well, that’s a great idea, but we just don’t have enough data to answer that. Big Data, in a scientific sense, is when that answer changes to: hey, that’s a great idea, and I’ll try that as soon as I get back to my office. (Or perhaps more realistically: hey that’s a great idea, and I’ll be sure to try that–as soon as I can get my one tech-savvy grad student to wrangle the data into the right format.)
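
To see the first of those benefits in action, here’s a quick sketch (simulated data, with an arbitrary true group difference of 0.3 standard deviations) showing how the 95% confidence interval around an estimated difference narrows roughly in proportion to the square root of the sample size:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # True difference between groups: 0.3 SD, chosen arbitrarily for illustration.
    for n in [20, 200, 2000, 20000]:
        a = rng.normal(0.3, 1.0, n)
        b = rng.normal(0.0, 1.0, n)
        diff = a.mean() - b.mean()
        se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
        half_width = stats.t.ppf(0.975, df=2 * n - 2) * se
        print(f"n per group = {n:>6}: estimate = {diff: .3f}, 95% CI half-width = {half_width:.3f}")

Going from 200 to 20,000 cases per group shrinks the interval by about a factor of ten, which is exactly the kind of precision that makes the stratifying and segmenting described above feasible.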

It’s probably worth noting in passing that this relativistic, application-centered definition of Big Data also seems to be picking up cultural steam far beyond the scientific community. Most of the recent criticisms of Big Data seem to have something vaguely like this definition in mind. (Actually, I would argue pretty strenuously that most of these criticisms aren’t really even about Big Data in this sense, and are actually just objections to mindless and uncritical exploratory analysis of any dataset, however big or small. But that’s a post for another day.)

The statistical perspective

A third way to think about Big Data is to focus on the kinds of statistical methods required in order to make sense of a dataset. On this view, what matters isn’t the size of the dataset, or the infrastructure demands it imposes, but how you use it. Once again, we can appeal to a largely facetious definition clinging for dear life onto a half-hearted effort at pithy insight:

Big Data, n: the minimal amount of data that allows you to set aside a quarter of your dataset as a hold-out and still train a model that performs reasonably well when tested out-of-sample.

The nugget of would-be insight in this case is this: the world is usually a more complicated place than it appears to be at first glance. It’s generally much harder to make reliable predictions about new (i.e., previously unseen) cases than one might suppose given conventional analysis practices in many fields of science. For example, in psychology, it’s very common to see papers report extremely large R2 values from fitted models–often accompanied by claims to the effect that the researchers were able to “predict” most of the variance in the outcome. But such claims are rarely actually supported by the data presented, because the studies in question overwhelmingly tend to overfit their models by using the same data for training and testing (to say nothing of p-hacking and other Questionable Research Practices). Fitting a model that can capably generalize to entirely new data often requires considerably more data than one might expect. The precise amount depends on the problem in question, but I think it’s fair to say that there are many domains in which problems that researchers routinely try to tackle with sample sizes of 20 – 100 cases would in reality require samples two or three orders of magnitude larger to really get a good grip on.
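
Here’s a minimal demonstration of the basic problem (a sketch using scikit-learn on pure noise, with sizes loosely mimicking a small psychology study): with 50 cases and 20 candidate predictors that have no relationship whatsoever to the outcome, scoring the model on the same data used to fit it yields an impressive-looking R2, while cross-validated R2 on held-out cases hovers around zero or below:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)

    # 50 cases, 20 predictors, and an outcome that is pure noise (no true signal at all).
    X = rng.normal(size=(50, 20))
    y = rng.normal(size=50)

    model = LinearRegression()

    # "Prediction" the bad way: fit and score on the same data.
    in_sample_r2 = model.fit(X, y).score(X, y)

    # Prediction the honest way: 5-fold cross-validated R2 on held-out cases.
    out_of_sample_r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()

    # With pure noise, in-sample R2 is about p / (n - 1), i.e., roughly 0.4 here;
    # out-of-sample R2 should land near zero or go negative.
    print("In-sample R2:       ", round(in_sample_r2, 2))
    print("Cross-validated R2: ", round(out_of_sample_r2, 2))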

The key point is that when we don’t have a lot of data to work with, it’s difficult to say much of anything about how big an effect is (unless we’re willing to adopt strong priors). Instead, we tend to fall back on the crutch of null hypothesis significance testing and start babbling on about whether there is or isn’t a “statistically significant effect”. I don’t really want to get into the question of whether the latter kind of thinking is ever useful (see Krantz (1999) for a review of its long and sordid history). What I do hope is not controversial is this: if your conclusions are ever in danger of changing radically depending on whether the coefficients in your model are on this side of p = .05 versus that side of p = .05, those conclusions are, by definition, not going to be terribly reliable over the long haul. Anything that helps move us away from that decision boundary and puts us in a position where we can worry more about what our conclusions ought to be than about whether we should be saying anything at all is a good thing. And since the single thing that matters most in that regard is the size of our dataset, it follows that we should want to have datasets that are as Big as possible. If we can fit complex models using lots of features and show that those models still perform well when tested out-of-sample, we can feel much more confident about whatever else we feel inclined to say.
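
If you want a feel for just how unreliable boundary-riding conclusions are, here’s a small sketch (the parameters are arbitrary but not unrealistic: a true effect of d = 0.4 and 30 subjects per group). Exact replications of the very same study end up on opposite sides of p = .05 much of the time, so a conclusion that hinges on which side a single study happens to land on will often flip from one replication to the next:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    d, n, n_replications = 0.4, 30, 5000  # true effect size, subjects per group, replications

    significant = np.empty(n_replications, dtype=bool)
    for i in range(n_replications):
        treatment = rng.normal(d, 1.0, n)
        control = rng.normal(0.0, 1.0, n)
        significant[i] = stats.ttest_ind(treatment, control).pvalue < 0.05

    # The same true effect, studied the same way, comes out "significant" in only
    # a minority of replications (roughly a third with these parameters).
    print("Proportion of replications with p < .05:", significant.mean())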

From a statistical perspective, then, one might say that a dataset is “Big” when it’s sufficiently large that we can spend most of our time thinking about what kinds of models to fit and what kinds of features to include so as to maximize predictive power and/or understanding, rather than worrying about what we can and can’t do with the data for fear of everything immediately collapsing into a giant multicollinear mess. Admittedly, this is more of a theoretical ideal than a practical goal, because as Andrew Gelman points out, in practice “N is never large”. As soon as we get our hands on enough data to stabilize the estimates from one kind of model, we immediately go on to ask more fine-grained questions that require even more data. And we don’t stop until we’re right back where we started, hovering at the very edge of our ability to produce sensible estimates, staring down the precipice of uncertainty. But hey, that’s okay. Nobody said these definitions have to be useful; it’s hard enough just trying to make them semi-coherent.

Conclusion

So there you have it: three ways to define Big Data. All three of these definitions are fuzzy, and will bleed into one another if you push on them a little bit. In particular, you could argue that, extensionally, the engineering definition of Big Data is a superset of the other two definitions, as it’s very likely that any dataset big enough to require a fundamentally different architecture is also big enough to handle complex statistical models and to do interesting and novel things with. So the point of all this is not to describe three completely separate communities with totally different practices; it’s simply to distinguish between three different uses of the term Big Data, all of which I think are perfectly sensible in different contexts, but that can cause communication problems when people from different backgrounds interact.

Of course, this isn’t meant to be an exhaustive catalog. I don’t doubt that there are many other potential definitions of Big Data that would each elicit enthusiastic head nods from various communities. For example, within the less technical sectors of the corporate world, there appears to be yet another fairly distinctive definition of Big Data. It goes something like this:

Big Data, n. A kind of black magic practiced by sorcerers known as quants. Nobody knows how it works, but it’s capable of doing anything.

In any case, the bottom line here is really just that context matters. If you go to APS this week, there’s a good chance you’ll stumble across many psychologists earnestly throwing the term “Big Data” around, even though they’re mostly discussing datasets that would fit snugly into a sliver of memory on a modern phone. If your day job involves crunching data at CERN or Google, this might amuse you. But the correct response, once you’re done smiling on the inside, is not, Hah! That’s not Big Data, you idiot! It should probably be something more like Hey, you talk kind of funny. You must come from a different part of the world than I do. We should get together some time and compare notes.

What we can and can’t learn from the Many Labs Replication Project

By now you will most likely have heard about the “Many Labs” Replication Project (MLRP)–a 36-site, 12-country, 6,344-subject effort to try to replicate a variety of classical and not-so-classical findings in psychology. You probably already know that the authors tested a variety of different effects–some recent, some not so recent (the oldest one dates back to 1941!); some well-replicated, others not so much–and reported successful replications of 10 out of 13 effects (though with widely varying effect sizes).

By and large, the reception of the MLRP paper has been overwhelmingly positive. Setting aside for the moment what the findings actually mean (see also Rolf Zwaan’s earlier take), my sense is that most psychologists are united in agreement that the mere fact that researchers at 36 different sites were able to get together and run a common protocol testing 13 different effects is a pretty big deal, and bodes well for the field in light of recent concerns about iffy results and questionable research practices.

But not everyone’s convinced. There now seems to be something of an incipient backlash against replication. Or perhaps not so much against replication itself as against the notion that the ongoing replication efforts have any special significance. An in press paper by Joseph Cesario makes a case for deferring independent efforts to replicate an effect until the original effect is theoretically well understood (a suggestion I disagree with quite strongly, and plan to follow up on in a separate post). And a number of people have questioned, in blog comments and tweets, what the big deal is. A case in point:

[embedded tweet from Dan Gilbert]

I think the charitable way to interpret this sentiment is that Gilbert and others are concerned that some people might read too much into the fact that the MLRP successfully replicated 10 out of 13 effects. And clearly, at least some journalists have; for instance, Science News rather irresponsibly reported that the MLRP “offers reassurance” to psychologists. That said, I don’t think it’s fair to characterize this as anything close to a dominant reaction, and I don’t think I’ve seen any researchers react to the MLRP findings as if the 10/13 number means anything special. The piece Dan Gilbert linked to in his tweet, far from promoting “hysteria” about replication, is a Nature News article by the inimitable Ed Yong, and is characteristically careful and balanced. Far from trumpeting the fact that 10 out of 13 findings replicated, here’s a direct quote from the article:

Project co-leader Brian Nosek, a psychologist at the Center for Open Science in Charlottesville, Virginia, finds the outcomes encouraging. “It demonstrates that there are important effects in our field that are replicable, and consistently so,“ he says. “But that doesn’t mean that 10 out of every 13 effects will replicate.“

Kahneman agrees. The study “appears to be extremely well done and entirely convincing“, he says, “although it is surely too early to draw extreme conclusions about entire fields of research from this single effort“.

Clearly, the mere fact that 10 out of 13 effects replicated is not in and of itself very interesting. For one thing (and as Ed Yong also noted in his article), a number of the effects were selected for inclusion in the project precisely because they had already been repeatedly replicated. Had the MLRP failed to replicate these effects–including, for instance, the seminal anchoring effect discovered by Kahneman and Tversky in the 1970s–the conclusion would likely have been that something was wrong with the methodology, and not that the anchoring effect doesn’t exist. So I think pretty much everyone can agree with Gilbert that we have most assuredly not learned, as a result of the MLRP, that there’s no replication crisis in psychology after all, and that roughly 76.9% of effects are replicable. Strictly speaking, all we know is that there are at least 10 effects in all of psychology that can be replicated. But that’s not exactly what one would call an earth-shaking revelation. What’s important to appreciate, however, is that the utility of the MLRP was never supposed to be about the number of successfully replicated effects. Rather, its value is tied to a number of other findings and demonstrations–some of which are very important, and have potentially big implications for the field at large. To wit:

1. The variance between effects is greater than the variance within effects.

Here’s the primary figure from the MLRP paper:

[Figure: Many Labs Replication Project results]

Notice that the range of meta-analytic estimates for the different effect sizes (i.e., the solid green circles) is considerably larger than the range of individual estimates within a given effect. In other words, if you want to know how big a given estimate is likely to be, it’s more informative to know what effect is being studied than to know which of the 36 sites is doing the study. This may seem like a rather esoteric point, but it has important implications. Most notably, it speaks directly to the question of how much one should expect effect sizes to fluctuate from lab to lab when direct replications are attempted. If you’ve been following the controversy over the relative (non-)replicability of a number of high-profile social priming studies, you’ve probably noticed that a common defense researchers use when their findings fail to replicate is to claim that the underlying effect is very fragile, and can’t be expected to work in other researchers’ hands. What the MLRP shows, for a reasonable set of studies, is that there does not in fact appear to be a huge amount of site-to-site variability in effects. Take currency priming, for example–an effect in which priming participants with money supposedly leads them to express capitalistic beliefs and behaviors more strongly. Given a single failure to replicate the effect, one could plausibly argue that perhaps the effect was simply too fragile to reproduce consistently. But when 36 different sites all produce effects within a very narrow range–with a mean that is effectively zero–it becomes much harder to argue that the problem is that the effect is highly variable. To the contrary, the effect size estimates are remarkably consistent–it’s just that they’re consistently close to zero.
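
In case the mechanics of that comparison aren’t obvious, here’s a toy sketch of the computation (the numbers are simulated to mimic the qualitative pattern, not the actual MLRP estimates): given a matrix of effect-size estimates with one row per effect and one column per site, you compare the spread of the per-effect means against the typical site-to-site spread within an effect:

    import numpy as np

    rng = np.random.default_rng(3)

    n_effects, n_sites = 13, 36

    # Simulated estimates: big differences between effects, modest site-to-site
    # wobble within each effect (the qualitative pattern the MLRP figure shows).
    true_effects = rng.uniform(0.0, 1.5, n_effects)
    estimates = true_effects[:, None] + rng.normal(0, 0.15, (n_effects, n_sites))

    between_effect_var = estimates.mean(axis=1).var(ddof=1)    # variance of the 13 effect means
    within_effect_var = estimates.var(axis=1, ddof=1).mean()   # average across-site variance within an effect

    print("Variance between effects:", round(between_effect_var, 3))
    print("Variance within effects: ", round(within_effect_var, 3))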

2. Larger effects show systematically greater variability.

You can see in the above figure that the larger an effect is, the more individual estimates appear to vary across sites. In one sense, this is not terribly surprising–you might already have the statistical intuition that the larger an effect is, the more reliable variance should be available to interact with other moderating variables. Conversely, if an effect is very small to begin with, it’s probably less likely that it could turn into a very large effect under certain circumstances–or that it might reverse direction entirely. But in another sense, this finding is actually quite unexpected, because, as noted above, there’s a general sense in the field that it’s the smaller effects that tend to be more fragile and heterogeneous. To the extent we can generalize from these 13 studies, these findings should give researchers some pause before attributing replication failures to invisible moderators that somehow manage to turn very robust effects (e.g., the original currency priming effect was nearly a full standard deviation in size) into nonexistent ones.

3. A number of seemingly important variables don’t systematically moderate effects.

There have long been expressions of concern over the potential impact of cultural and population differences on psychological effects. For instance, despite repeated demonstrations that internet samples typically provide data that are as good as conventional lab samples, many researchers continue to display a deep (and in my view, completely unwarranted) skepticism of findings obtained online. More reasonably, many researchers have worried that effects obtained using university students in Western nations–the so-called WEIRD samples–may not generalize to other social groups, cultures and countries. While the MLRP results are obviously not the last word on this debate, it’s instructive to note that factors like data acquisition approach (online vs. offline) and cultural background (US vs. non-US) didn’t appear to exert a systematic effect on results. This doesn’t mean that there are no culture-specific effects in psychology of course (there undoubtedly are), but simply that our default expectation should probably be that most basic effects will generalize across cultures to at least some extent.

4. Researchers have pretty good intuitions about which findings will replicate and which ones won’t.

At the risk of offending some researchers, I submit that the likelihood that a published finding will successfully replicate is correlated to some extent with (a) the field of study it falls under and (b) the journal in which it was originally published. For example, I don’t think it’s crazy to suggest that if one were to try to replicate all of the social priming studies and all of the vision studies published in Psychological Science in the last decade, the vision studies would replicate at a consistently higher rate. Anecdotal support for this intuition comes from a string of high-profile failures to replicate famous findings–e.g., John Bargh’s demonstration that priming participants with elderly concepts leads them to walk away from an experiment more slowly. However, the MLRP goes one better than anecdote, as it included a range of effects that clearly differ in their a priori plausibility. Fortuitously, just prior to publicly releasing the MLRP results, Brian Nosek asked the following question on Twitter:

[embedded tweet from Brian Nosek]

Several researchers, including me, took Brian up on his offer; here are the responses:

[embedded tweet replies]

As you can see, pretty much everyone who replied to Brian expressed skepticism about the two priming studies (#9 and #10 in Hal Pashler’s reply). There was less consensus on the third effect. (As it happens, there were ultimately only 2 failures to replicate–the third effect became statistically significant when samples were weighted properly.) Nonetheless, most of us picked Imagined Contact as number 3, which did in fact emerge as the smallest of the statistically significant effects. (It’s probably worth mentioning that I’d personally only heard of 4 or 5 of the 13 effects prior to reading their descriptions, so it’s not as though my response was based on a deep knowledge of prior work on these effects–I simply read the descriptions of the findings and gauged their plausibility accordingly.)

Admittedly, these are just two (or three) studies. It’s possible that the MLRP researchers just happened to pick two of the only high-profile priming studies that both seem highly counterintuitive and happen to be false positives. That said, I don’t really think these findings stand out from the mass of other counterintuitive priming studies in social psychology in any way. While we obviously shouldn’t conclude from this that no high-profile, counterintuitive priming studies will successfully replicate, the fact that a number of researchers were able to prospectively determine, with a high degree of accuracy, which effects would fail to replicate (and, among those that replicated, which were rather weak), is a pretty good sign that researchers’ intuitions about plausibility and replicability are pretty decent.

Personally, I’d love to see this principle pushed further, and formalized as a much broader tool for evaluating research findings. For example, one can imagine a website where researchers could publicly (and perhaps anonymously) register their degree of confidence in the likely replicability of any finding associated with a doi or PubMed ID. I think such a service would be hugely valuable–not only because it would help calibrate individual researchers’ intuitions and provide a sense of the field’s overall belief in an effect, but because it would provide a useful index of a finding’s importance in the event of successful replication (i.e., the authors of a well-replicated finding should probably receive more credit if the finding was initially viewed with great skepticism than if it was universally deemed rather obvious).

There are other potentially important findings in the MLRP paper that I haven’t mentioned here (see Rolf Zwaan’s blog post for additional points), but if nothing else, I hope this will help convince any remaining skeptics that this is indeed a landmark paper for psychology–even though the number of successful replications is itself largely meaningless.

Oh, there’s one last point worth mentioning, in light of the rather disagreeable tone of the debate surrounding previous replication efforts. If your findings are ever called into question by a multinational consortium of 36 research groups, this is exactly how you should respond:

Social psychologist Travis Carter of Colby College in Waterville, Maine, who led the original flag-priming study, says that he is disappointed but trusts Nosek’s team wholeheartedly, although he wants to review their data before commenting further. Behavioural scientist Eugene Caruso at the University of Chicago in Illinois, who led the original currency-priming study, says, “We should use this lack of replication to update our beliefs about the reliability and generalizability of this effect“, given the “vastly larger and more diverse sample“ of the MLRP. Both researchers praised the initiative.

Carter and Caruso’s attitude towards the MLRP is really exemplary; people make mistakes all the time when doing research, and shouldn’t be held responsible for the mere act of publishing incorrect findings (excepting cases of deliberate misconduct or clear negligence). What matters is, as Caruso notes, whether and to what extent one shows a willingness to update one’s beliefs in response to countervailing evidence. That’s one mark of a good scientist.

what do you get when you put 1,000 psychologists together in one journal?

I’m working on a TOP SEKKRIT* project involving large-scale data mining of the psychology literature. I don’t have anything to say about the TOP SEKKRIT* project just yet, but I will say that in the process of extracting certain information I needed in order to do certain things I won’t talk about, I ended up with certain kinds of data that are useful for certain other tangential analyses. Just for fun, I threw some co-authorship data from 2,000+ Psychological Science articles into the d3.js blender, and out popped an interactive network graph of all researchers who have published at least 2 papers in Psych Science in the last 10 years**. It looks like this:

[Figure: co-authorship network graph]

You can click on the image to take a closer (and interactive) look.

I don’t think this is very useful for anything right now, but if nothing else, it’s fun to drag Adam Galinsky around the screen and watch half of the field come along for the ride. There are plenty of other more interesting things one could do with this, though, and it’s also quite easy to generate the same graph for other journals, so I expect to have more to say about this later on.
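
For anyone who wants to roll their own version for another journal, the annoying part is scraping the author metadata; once you have a list of author lists, building the graph itself takes only a few lines. Here’s a minimal sketch (Python and networkx rather than d3, with made-up author names standing in for the real data):

    from itertools import combinations

    import networkx as nx

    # Toy stand-in for scraped article metadata: one list of authors per paper.
    papers = [
        ["A. Author", "B. Author", "C. Author"],
        ["A. Author", "C. Author"],
        ["D. Author", "E. Author"],
        ["B. Author", "C. Author", "E. Author"],
    ]

    G = nx.Graph()
    for authors in papers:
        # Add an edge (or bump its weight) for every co-author pair on the paper.
        for a, b in combinations(authors, 2):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

    # Keep only authors appearing on at least 2 papers (a simpler cousin of the
    # filter described in the second footnote below).
    paper_counts = {}
    for authors in papers:
        for author in authors:
            paper_counts[author] = paper_counts.get(author, 0) + 1
    G.remove_nodes_from([a for a, count in paper_counts.items() if count < 2])

    print(G.number_of_nodes(), "authors;", G.number_of_edges(), "co-authorship ties")

From there, networkx’s json_graph.node_link_data helper will get you most of the way to the node-link JSON that a d3 force layout consumes; the rest is mostly styling.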

 

* It’s not really TOP SEKKRIT at all–it just sounds more exciting that way.

** Or, more accurately, researchers who have co-authored at least 2 Psych Science papers with other researchers who meet the same criterion. Otherwise we’d have even more nodes in the graph, and as you can see, it’s already pretty messy.

the truth is not optional: five bad reasons (and one mediocre one) for defending the status quo

You could be forgiven for thinking that academic psychologists have all suddenly turned into professional whistleblowers. Everywhere you look, interesting new papers are cropping up purporting to describe this or that common-yet-shady methodological practice, and telling us what we can collectively do to solve the problem and improve the quality of the published literature. In just the last year or so, Uri Simonsohn introduced new techniques for detecting fraud, and used those tools to identify at least 3 cases of high-profile, unabashed data forgery. Simmons and colleagues reported simulations demonstrating that standard exploitation of research degrees of freedom in analysis can produce extremely high rates of false positive findings. Pashler and colleagues developed a “Psych file drawer” repository for tracking replication attempts. Several researchers raised trenchant questions about the veracity and/or magnitude of many high-profile psychological findings such as John Bargh’s famous social priming effects. Wicherts and colleagues showed that authors of psychology articles who are less willing to share their data upon request are more likely to make basic statistical errors in their papers. And so on and so forth. The flood shows no signs of abating; just last week, the APS journal Perspectives on Psychological Science announced that it’s introducing a new “Registered Replication Report” section that will commit to publishing pre-registered high-quality replication attempts, irrespective of their outcome.

Personally, I think these are all very welcome developments for psychological science. They’re solid indications that we psychologists are going to be able to police ourselves successfully in the face of some pretty serious problems, and they bode well for the long-term health of our discipline. My sense is that the majority of other researchers–perhaps the vast majority–share this sentiment. Still, as with any zeitgeist shift, there are always naysayers. In discussing these various developments and initiatives with other people, I’ve found myself arguing, with somewhat surprising frequency, with people who for various reasons think it’s not such a good thing that Uri Simonsohn is trying to catch fraudsters, or that social priming findings are being questioned, or that the consequences of flexible analyses are being exposed. Since many of the arguments I’ve come across tend to recur, I thought I’d summarize the most common ones here–along with the rebuttals I usually offer for why, with one possible exception, the arguments for giving a pass to sloppy-but-common methodological practices are not very compelling.

“But everyone does it, so how bad can it be?”

We typically assume that long-standing conventions must exist for some good reason, so when someone raises doubts about some widespread practice, it’s quite natural to question the person raising the doubts rather than the practice itself. Could it really, truly be (we say) that there’s something deeply strange and misguided about using p values? Is it really possible that the reporting practices converged on by thousands of researchers in tens of thousands of neuroimaging articles might leave something to be desired? Could failing to correct for the many researcher degrees of freedom associated with most datasets really inflate the false positive rate so dramatically?

The answer to all these questions, of course, is yes–or at least, we should allow that it could be yes. It is, in principle, entirely possible for an entire scientific field to regularly do things in a way that isn’t very good. There are domains where appeals to convention or consensus make perfect sense, because there are few good reasons to do things a certain way except inasmuch as other people do them the same way. If everyone else in your country drives on the right side of the road, you may want to consider driving on the right side of the road too. But science is not one of those domains. In science, there is no intrinsic benefit to doing things just for the sake of convention. In fact, almost by definition, major scientific advances are ones that tend to buck convention and suggest things that other researchers may not have considered possible or likely.

In the context of common methodological practice, it’s no defense at all to say but everyone does it this way, because there are usually relatively objective standards by which we can gauge the quality of our methods, and it’s readily apparent that there are many cases where the consensus approach leaves something to be desired. For instance, you can’t really justify failing to correct for multiple comparisons when you report a single test that’s just barely significant at p < .05 on the grounds that nobody else corrects for multiple comparisons in your field. That may be a valid explanation for why your paper successfully got published (i.e., reviewers didn’t want to hold your feet to the fire for something they themselves are guilty of in their own work), but it’s not a valid defense of the actual science. If you run a t-test on randomly generated data 20 times, you will, on average, get a significant result, p < .05, once. It does no one any good to argue that because the convention in a field is to allow multiple testing–or to ignore statistical power, or to report only p values and not effect sizes, or to omit mention of conditions that didn’t ‘work’, and so on–it’s okay to ignore the issue. There’s a perfectly reasonable question as to whether it’s a smart career move to start imposing methodological rigor on your work unilaterally (see below), but there’s no question that the mere presence of consensus or convention surrounding a methodological practice does not make that practice okay from a scientific standpoint.
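
That bit of arithmetic is easy to verify for yourself; here’s a short sketch (simulated null data) that runs batches of 20 t-tests on pure noise and counts the false positives. On average about one test per batch comes out “significant”, and roughly 64% of batches (1 - .95^20) contain at least one:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(123)

    n_batches, n_tests, n_per_group = 5000, 20, 30
    hits_per_batch = np.empty(n_batches)

    for i in range(n_batches):
        # 20 t-tests, each comparing two groups of pure noise (no true effects anywhere).
        pvals = np.array([
            stats.ttest_ind(rng.normal(size=n_per_group), rng.normal(size=n_per_group)).pvalue
            for _ in range(n_tests)
        ])
        hits_per_batch[i] = np.sum(pvals < 0.05)

    print("Average number of p < .05 results per 20 tests:", hits_per_batch.mean())
    print("Proportion of batches with at least one p < .05:", (hits_per_batch > 0).mean())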

“But psychology would break if we could only report results that were truly predicted a priori!”

This is a defense that has some plausibility at first blush. It’s certainly true that if you force researchers to correct for multiple comparisons properly, and report the many analyses they actually conducted–and not just those that “worked”–a lot of stuff that used to get through the filter will now get caught in the net. So, by definition, it would be harder to detect unexpected effects in one’s data–even when those unexpected effects are, in some sense, ‘real’. But the important thing to keep in mind is that raising the bar for what constitutes a believable finding doesn’t actually prevent researchers from discovering unexpected new effects; all it means is that it becomes harder to report post-hoc results as pre-hoc results. It’s not at all clear why forcing researchers to put in more effort validating their own unexpected finding is a bad thing.

In fact, forcing researchers to go the extra mile in this way would have one exceedingly important benefit for the field as a whole: it would shift the onus of determining whether an unexpected result is plausible enough to warrant pursuing away from the community as a whole, and towards the individual researcher who discovered the result in the first place. As it stands right now, if I discover an unexpected result (p < .05!) that I can make up a compelling story for, there’s a reasonable chance I might be able to get that single result into a short paper in, say, Psychological Science. And reap all the benefits that attend getting a paper into a “high-impact” journal. So in practice there’s very little penalty to publishing questionable results, even if I myself am not entirely (or even mostly) convinced that those results are reliable. This state of affairs is, to put it mildly, not A Good Thing.

In contrast, if you as an editor or reviewer start insisting that I run another study that directly tests and replicates my unexpected finding before you’re willing to publish my result, I now actually have something at stake. Because it takes time and money to run new studies, I’m probably not going to bother to follow up on my unexpected finding unless I really believe it. Which is exactly as it should be: I’m the guy who discovered the effect, and I know about all the corners I have or haven’t cut in order to produce it; so if anyone should make the decision about whether to spend more taxpayer money chasing the result, it should be me. You, as the reviewer, are not in a great position to know how plausible the effect truly is, because you have no idea how many different types of analyses I attempted before I got something to ‘work’, or how many failed studies I ran that I didn’t tell you about. Given the huge asymmetry in information, it seems perfectly reasonable for reviewers to say, You think you have a really cool and unexpected effect that you found a compelling story for? Great; go and directly replicate it yourself and then we’ll talk.

“But mistakes happen, and people could get falsely accused!”

Some people don’t like the idea of a guy like Simonsohn running around and busting people’s data fabrication operations for the simple reason that they worry that the kind of approach Simonsohn used to detect fraud is just not that well-tested, and that if we’re not careful, innocent people could get swept up in the net. I think this concern stems from fundamentally good intentions, but once again, I think it’s also misguided.

For one thing, it’s important to note that, despite all the press, Simonsohn hasn’t actually done anything qualitatively different from what other whistleblowers or skeptics have done in the past. He may have suggested new techniques that improve the efficiency with which cheating can be detected, but it’s not as though he invented the ability to report or investigate other researchers for suspected misconduct. Researchers suspicious of other researchers’ findings have always used qualitatively similar arguments to raise concerns. They’ve said things like, hey, look, this is a pattern of data that just couldn’t arise by chance, or, the numbers are too similar across different conditions.

More to the point, perhaps, no one is seriously suggesting that independent observers shouldn’t be allowed to raise their concerns about possible misconduct with journal editors, professional organizations, and universities. There really isn’t any viable alternative. Naysayers who worry that innocent people might end up ensnared by false accusations presumably aren’t suggesting that we do away with all of the existing mechanisms for ensuring accountability; but since the role of people like Simonsohn is only to raise suspicion and provide evidence (and not to do the actual investigating or firing), it’s clear that there’s no way to regulate this type of behavior even if we wanted to (which I would argue we don’t). If I wanted to spend the rest of my life scanning the statistical minutiae of psychology articles for evidence of misconduct and reporting it to the appropriate authorities (and I can assure you that I most certainly don’t), there would be nothing anyone could do to stop me, nor should there be. Remember that accusing someone of misconduct is something anyone can do, but establishing that misconduct has actually occurred is a serious task that requires careful internal investigation. No one–certainly not Simonsohn–is suggesting that a routine statistical test should be all it takes to end someone’s career. In fact, Simonsohn himself has noted that he identified a 4th case of likely fraud that he dutifully reported to the appropriate authorities only to be met with complete silence. Given all the incentives universities and journals have to look the other way when accusations of fraud are made, I suspect we should be much more concerned about the false negative rate than the false positive rate when it comes to fraud.

“But it hurts the public’s perception of our field!”

Sometimes people argue that even if the field does have some serious methodological problems, we still shouldn’t discuss them publicly, because doing so is likely to instill a somewhat negative view of psychological research in the public at large. The unspoken implication being that, if the public starts to lose confidence in psychology, fewer students will enroll in psychology courses, fewer faculty positions will be created to teach students, and grant funding to psychologists will decrease. So, by airing our dirty laundry in public, we’re only hurting ourselves. I had an email exchange with a well-known researcher to exactly this effect a few years back in the aftermath of the Vul et al “voodoo correlations” paper–a paper I commented on, arguing that the problem was even worse than suggested. The argument my correspondent raised was, in effect, that we (i.e., neuroimaging researchers) are all at the mercy of agencies like NIH to keep us employed, and if it starts to look like we’re clowning around, the unemployment rate for people with PhDs in cognitive neuroscience might start to rise precipitously.

While I obviously wouldn’t want anyone to lose their job or their funding solely because of a change in public perception, I can’t say I’m very sympathetic to this kind of argument. The problem is that it places short-term preservation of the status quo above both the long-term health of the field and the public’s interest. For one thing, I think you have to be quite optimistic to believe that some of the questionable methodological practices that are relatively widespread in psychology (data snooping, selective reporting, etc.) are going to sort themselves out naturally if we just look the other way and let nature run its course. The obvious reason for skepticism in this regard is that many of the same criticisms have been around for decades, and it’s not clear that anything much has improved. Maybe the best example of this is Sedlmeier and Gigerenzer’s 1989 paper entitled “Do studies of statistical power have an effect on the power of studies?”, in which the authors convincingly showed that despite three decades of work by luminaries like Jacob Cohen advocating power analyses, statistical power had not risen appreciably in psychology studies. The presence of such unwelcome demonstrations suggests that sweeping our problems under the rug in the hopes that someone (the mice?) will unobtrusively take care of them for us is wishful thinking.

In any case, even if problems did tend to solve themselves when hidden away from the prying eyes of the media and public, the bigger problem with what we might call the “saving face” defense is that it is, fundamentally, an abuse of taxpayers’ trust. As with so many other things, Richard Feynman summed up the issue eloquently in his famous Cargo Cult Science commencement speech:

For example, I was a little surprised when I was talking to a friend who was going to go on the radio. He does work on cosmology and astronomy, and he wondered how he would explain what the applications of this work were. “Well,” I said, “there aren’t any.” He said, “Yes, but then we won’t get support for more research of this kind.” I think that’s kind of dishonest. If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing–and if they don’t want to support you under those circumstances, then that’s their decision.

The fact of the matter is that our livelihoods as researchers depend directly on the goodwill of the public. And the taxpayers are not funding our research so that we can “discover” interesting-sounding but ultimately unreplicable effects. They’re funding our research so that we can learn more about the human mind and hopefully be able to fix it when it breaks. If a large part of the profession is routinely employing practices that are at odds with those goals, it’s not clear why taxpayers should be footing the bill. From this perspective, it might actually be a good thing for the field to revise its standards, even if (in the worst-case scenario) that causes a short-term contraction in employment.

“But unreliable effects will just fail to replicate, so what’s the big deal?”

This is a surprisingly common defense of sloppy methodology, maybe the single most common one. It’s also an enormous cop-out, since it pre-empts the need to think seriously about what you’re doing in the short term. The idea is that, since no single study is definitive, and a consensus about the reality or magnitude of most effects usually doesn’t develop until many studies have been conducted, it’s reasonable to impose a fairly low bar on initial reports and then wait and see what happens in subsequent replication efforts.

I think this is a nice ideal, but things just don’t seem to work out that way in practice. For one thing, there doesn’t seem to be much of a penalty for publishing high-profile results that later fail to replicate. The reason, I suspect, is that we’re inclined to give researchers the benefit of the doubt: surely (we say to ourselves), Jane Doe did her best, and we like Jane, so why should we question the work she produces? If we’re really so skeptical about her findings, shouldn’t we go replicate them ourselves, or wait for someone else to do it?

While this seems like an agreeable and fair-minded attitude, it isn’t actually a terribly good way to look at things. Granted, if you really did put in your best effort–dotted all your i’s and crossed all your t’s–and still ended up reporting a false result, we shouldn’t punish you for it. I don’t think anyone is seriously suggesting that researchers who inadvertently publish false findings should be ostracized or shunned. On the other hand, it’s not clear why we should continue to celebrate scientists who ‘discover’ interesting effects that later turn out not to replicate. If someone builds a career on the discovery of one or more seemingly important findings, and those findings later turn out to be wrong, the appropriate attitude is to update our beliefs about the merit of that person’s work. As it stands, we rarely seem to do this.

In any case, the bigger problem with appeals to replication is that the delay between initial publication of an exciting finding and subsequent consensus disconfirmation can be very long, and often spans entire careers. Waiting decades for history to prove an influential idea wrong is a very bad idea if the available alternative is to nip the idea in the bud by requiring stronger evidence up front.

There are many notable examples of this in the literature. A well-publicized recent one is John Bargh’s work on the motor effects of priming people with elderly stereotypes–namely, that priming people with words related to old age makes them walk away from the experiment more slowly. Bargh’s original paper was published in 1996, and according to Google Scholar, has now been cited over 2,000 times. It has undoubtedly been hugely influential in steering many psychologists’ research programs in particular directions (in many cases, directions that are equally counterintuitive and now also seem open to question). And yet it’s taken over 15 years for a consensus to develop that the original effect is at the very least much smaller in magnitude than originally reported, and potentially so small as to be, for all intents and purposes, “not real”. I don’t know who reviewed Bargh’s paper back in 1996, but I suspect that if they ever considered the seemingly implausible size of the effect being reported, they might well have thought to themselves, well, I’m not sure I believe it, but that’s okay–time will tell. Time did tell, of course; but time is kind of lazy, so it took fifteen years for it to tell. In an alternate universe, a reviewer might have said, well, this is a striking finding, but the effect seems implausibly large; I would like you to try to directly replicate it in your lab with a much larger sample first. I recognize that this is onerous and annoying, but my primary responsibility is to ensure that only reliable findings get into the literature, and inconveniencing you seems like a small price to pay. Plus, if the effect is really what you say it is, people will be all the more likely to believe you later on.

Or take the actor-observer asymmetry, which appears in just about every introductory psychology textbook written in the last 20 – 30 years. It states that people are relatively more likely to attribute their own behavior to situational factors, and relatively more likely to attribute other agents’ behaviors to those agents’ dispositions. When I slip and fall, it’s because the floor was wet; when you slip and fall, it’s because you’re dumb and clumsy. This putative asymmetry was introduced and discussed at length in a book by Jones and Nisbett in 1971, and hundreds of studies have investigated it at this point. And yet a 2006 meta-analysis by Malle suggested that the cumulative evidence for the actor-observer asymmetry is actually very weak. There are some specific circumstances under which you might see something like the postulated effect, but what is quite clear is that it’s nowhere near strong enough an effect to justify being routinely invoked by psychologists and even laypeople to explain individual episodes of behavior. Unfortunately, at this point it’s almost impossible to dislodge the actor-observer asymmetry from the psyche of most researchers–a reality underscored by the fact that the Jones and Nisbett book has been cited nearly 3,000 times, whereas the 2006 meta-analysis has been cited only 96 times (a very low rate for an important and well-executed meta-analysis published in Psychological Bulletin).

The fact that it can take many years–whether 15 or 45–for a literature to build up to the point where we’re even in a position to suggest with any confidence that an initially exciting finding could be wrong means that we should be very hesitant to appeal to long-term replication as an arbiter of truth. Replication may be the gold standard in the very long term, but in the short and medium term, appealing to replication is a huge cop-out. If you can see problems with an analysis right now that cast doubt on a study’s results, it’s an abdication of responsibility to downplay your concerns and wait for someone else to come along and spend a lot more time and money trying to replicate the study. You should point out now why you have concerns. If the authors can address them, the results will look all the better for it. And if the authors can’t address your concerns, well, then, you’ve just done science a service. If it helps, don’t think of it as a matter of saying mean things about someone else’s work, or of asserting your own ego; think of it as potentially preventing a lot of very smart people from wasting a lot of time chasing down garden paths–and also saving a lot of taxpayer money. Remember that our job as scientists is not to make other scientists’ lives easy in the hopes they’ll repay the favor when we submit our own papers; it’s to establish and apply standards that produce convergence on the truth in the shortest amount of time possible.

“But it would hurt my career to be meticulously honest about everything I do!”

Unlike the other considerations listed above, I think the concern that being honest carries a price when it comes to doing research has a good deal of merit to it. Given the aforementioned delay between initial publication and later disconfirmation of findings (which even in the best case is usually longer than the delay between obtaining a tenure-track position and coming up for tenure), researchers have many incentives to emphasize expediency and good story-telling over accuracy, and it would be disingenuous to suggest otherwise. No malevolence or outright fraud is implied here, mind you; the point is just that if you keep second-guessing and double-checking your analyses, or insist on routinely collecting more data than other researchers might think is necessary, you will very often find that results that could have made a bit of a splash given less rigor are actually not particularly interesting upon careful cross-examination. Which means that researchers who have, shall we say, less of a natural inclination to second-guess, double-check, and cross-examine their own work will, to some degree, be more likely to publish results that make a bit of a splash (it would be nice to believe that pre-publication peer review filters out sloppy work, but empirically, it just ain’t so). So this is a classic tragedy of the commons: what’s good for a given individual, career-wise, is clearly bad for the community as a whole.

I wish I had a good solution to this problem, but I don’t think there are any quick fixes. The long-term solution, as many people have observed, is to restructure the incentives governing scientific research in such a way that individual and communal benefits are directly aligned. Unfortunately, that’s easier said than done. I’ve written a lot both in papers (1, 2, 3) and on this blog (see posts linked here) about various ways we might achieve this kind of realignment, but what’s clear is that it will be a long and difficult process. For the foreseeable future, it will continue to be an understandable though highly lamentable defense to say that the cost of maintaining a career in science is that one sometimes has to play the game the same way everyone else plays the game, even if it’s clear that the rules everyone plays by are detrimental to the communal good.

 

Anyway, this may all sound a bit depressing, but I really don’t think it should be taken as such. Personally I’m actually very optimistic about the prospects for large-scale changes in the way we produce and evaluate science within the next few years. I do think we’re going to collectively figure out how to do science in a way that directly rewards people for employing research practices that are maximally beneficial to the scientific community as a whole. But I also think that for this kind of change to take place, we first need to accept that many of the defenses we routinely give for using iffy methodological practices are just not all that compelling.

tracking replication attempts in psychology–for real this time

I’ve written a few posts on this blog about how the development of better online infrastructure could help address and even solve many of the problems psychologists and other scientists face (e.g., the low reliability of peer review, the ‘fudge factor’ in statistical reporting, the sheer size of the scientific literature, etc.). Actually, that general question–how we can use technology to do better science–occupies a good chunk of my research these days (see e.g., Neurosynth). One question I’ve been interested in for a long time is how to keep track not only of ‘successful’ studies (i.e., those that produce sufficiently interesting effects to make it into the published literature), but also replication failures (or successes of limited interest) that wind up in researchers’ file drawers. A couple of years ago I went so far as to build a prototype website for tracking replication attempts in psychology. Unfortunately, it never went anywhere, partly (okay, mostly) because the site really sucked, and partly because I didn’t really invest much effort in drumming up interest (mostly due to lack of time). But I still think the idea is a valuable one in principle, and a lot of other people have independently had the same idea (which means it must be right, right?).

Anyway, it looks like someone finally had the cleverness, time, and money to get this right. Hal Pashler, Sean Kang*, and colleagues at UCSD have been developing an online database for tracking attempted replications of psychology studies for a while now, and it looks like it’s now in beta. PsychFileDrawer is a very slick, full-featured platform that really should–if there’s any justice in the world–provide the kind of service everyone’s been saying we need for a long time now. If it doesn’t work, I think we’ll have some collective soul-searching to do, because I don’t think it’s going to get any easier than this to add and track attempted replications. So go use it!

 

*Full disclosure: Sean Kang is a good friend of mine, so I’m not completely impartial in plugging this (though I’d do it anyway). Sean also happens to be amazingly smart and in search of a faculty job right now. If I were you, I’d hire him.

we, the people, who make mistakes–economists included

Andrew Gelman discusses a “puzzle that’s been bugging [him] for a while”:

Pop economists (or, at least, pop micro-economists) are often making one of two arguments:

1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist.

2. People are irrational and they need economists, with their open minds, to show them how to be rational and efficient.

Argument 1 is associated with “why do they do that?” sorts of puzzles. Why do they charge so much for candy at the movie theater, why are airline ticket prices such a mess, why are people drug addicts, etc. The usual answer is that there’s some rational reason for what seems like silly or self-destructive behavior.

Argument 2 is associated with “we can do better” claims such as why we should fire 80% of public-schools teachers or Moneyball-style stories about how some clever entrepreneur has made a zillion dollars by exploiting some inefficiency in the market.

The trick is knowing whether you’re gonna get 1 or 2 above. They’re complete opposites!

Personally what I find puzzling isn’t really how to reconcile these two strands (which do seem to somehow coexist quite peacefully in pop economists’ writings); it’s how anyone–economist or otherwise–still manages to believe people are rational in any meaningful sense (and I’m not saying Andrew does; in fact, see below).

There are at least two non-trivial ways to define rationality. One is in terms of an ideal agent’s actions–i.e., rationality is what a decision-maker would choose to do if she had unlimited cognitive resources and knew all the information relevant to a given decision. Well, okay, maybe not an ideal agent, but at the very least a very smart one. This is the sense of rationality in which you might colloquially remark to your neighbor that buying lottery tickets is an irrational thing to do, because the odds are stacked against you. The expected value of buying a lottery ticket (i.e., the average net return you’d expect per ticket over the long run) is generally negative, so in some normative sense, you could say it’s irrational to buy lottery tickets.

This definition of irrationality is probably quite close to the colloquial usage of the term, but it’s not really interesting from an academic standpoint, because nobody (economists included) really believes we’re rational in this sense. It’s blatantly obvious to everyone that none of us really make normatively correct choices much of the time, if for no other reason than that we’re all somewhat lacking in the omniscience department.

What economists mean when they talk about rationality is something more technical; specifically, it’s that people manifest stationary preferences. That is, given any set of preferences an individual happens to have (which may seem completely crazy to everyone else), rationality implies that that person expresses those preferences in a consistent manner. If you like dark chocolate more than milk chocolate, and milk chocolate more than Skittles, you shouldn’t like Skittles more than dark chocolate. If you do, you’re violating the principle of transitivity, which would effectively make it impossible to model your preferences formally (since we’d have no way of telling what you’d prefer in any given situation). And that would be a problem for standard economic theory, which is based on the assumption that people are fundamentally rational agents (in this particular sense).
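
To make the consistency requirement concrete, here’s a toy sketch in Python of what checking transitivity actually amounts to. The pairwise preferences are entirely made up (borrowing the chocolate-and-Skittles example above), so treat it as an illustration rather than anything resembling real data:

```python
from itertools import permutations

# Toy pairwise preferences: prefs[(a, b)] == True means "a is preferred to b".
# These particular choices are invented for illustration; note the cycle.
prefs = {
    ("dark chocolate", "milk chocolate"): True,
    ("milk chocolate", "skittles"): True,
    ("skittles", "dark chocolate"): True,  # this one creates the intransitivity
}

def prefers(a, b):
    """Return True if a is preferred to b, based on the stated pairwise choices."""
    if (a, b) in prefs:
        return prefs[(a, b)]
    if (b, a) in prefs:
        return not prefs[(b, a)]
    raise ValueError(f"No stated preference between {a} and {b}")

def transitivity_violations(items):
    """Find every ordered triple (a, b, c) where a > b and b > c but not a > c."""
    violations = []
    for a, b, c in permutations(items, 3):
        if prefers(a, b) and prefers(b, c) and not prefers(a, c):
            violations.append((a, b, c))
    return violations

items = ["dark chocolate", "milk chocolate", "skittles"]
print(transitivity_violations(items))
# A consistent (transitive) agent would print []; this toy agent doesn't.
```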

The reason I say it’s puzzling that anyone still believes people are rational in even this narrower sense is that decades of behavioral economics and psychology research have repeatedly demonstrated that people just don’t have consistent preferences. You can radically influence and alter decision-makers’ behavior in all sorts of ways that simply aren’t predicted or accounted for by Rational Choice Theory (RCT). I’ll give just two examples here, but there are any number of others, as many excellent books attest (e.g., Dan Ariely‘s Predictably Irrational, or Thaler and Sunstein’s Nudge).

The first example stems from famous work by Madrian and Shea (2001) investigating the effects of savings plan designs on employees’ 401(k) choices. By pretty much anyone’s account, decisions about savings plans should be a pretty big deal for most employees. The difference between opting into a 401(k) and opting out of one can easily amount to several hundred thousand dollars over the course of a lifetime, so you would expect people to have a huge incentive to make the choice that’s most consistent with their personal preferences (whether those preferences happen to be for splurging now or saving for later). Yet what Madrian and Shea convincingly showed was that most employees simply go with the default plan option. When companies switch from opt-in to opt-out (i.e., instead of calling up HR and saying you want to join the plan, you’re enrolled by default, and have to fill out a form if you want to opt out), 401(k) enrollment jumps by nearly 50 percentage points.

This result (and any number of others along similar lines) makes no sense under rational choice theory, because it’s virtually impossible to conceive of a consistent set of preferences that would explain this type of behavior. Many of the same employees who won’t take ten minutes out of their day to opt in or out of their 401(k) will undoubtedly drive across town to save a few dollars on their groceries; like most people, they’ll look for bargains, buy cheaper goods rather than more expensive ones, worry about leaving something for their children after they’re gone, and so on and so forth. And one can’t simply attribute the discrepancy in behavior to ignorance (i.e., “no one reads the fine print!”), because the whole point of massive incentives is that they’re supposed to incentivize you to do things like look up information that could be relevant to, oh, say, having hundreds of thousands of extra dollars in your bank account in forty years. If you’re willing to look for coupons in the sunday paper to save a few dollars, but aren’t willing to call up HR and ask about your savings plan, there is, to put it frankly, something mildly inconsistent about your preferences.

The other example stems from the enormous literature on risk aversion. The classic risk aversion finding is that most people require a higher nominal payoff on risky prospects than on safe ones before they’re willing to accept the risky prospect. For instance, most people would rather have $10 for sure than $50 with 25% probability, even though the expected value of the latter is 25% higher (an amazing return!). Risk aversion is a pervasive phenomenon, and crops up everywhere, including in financial investments, where it is known as the equity premium puzzle (the puzzle being that many investors prefer bonds to stocks even though the historical record suggests a massively higher rate of return for stocks over the long term).
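
Just to make the arithmetic behind that $10-versus-$50 example explicit (a trivial back-of-the-envelope calculation, included purely for concreteness):

```python
# Expected value of the two options in the example above.
sure_thing = 10.0                            # $10 with certainty
risky = 0.25 * 50.0 + 0.75 * 0.0             # $50 with 25% probability, else nothing
print(sure_thing, risky)                     # 10.0 vs 12.5
print((risky - sure_thing) / sure_thing)     # 0.25: the gamble's EV is 25% higher
```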

From a naive standpoint, you might think the challenge risk aversion poses to rational choice theory is that risk aversion is just, you know, stupid. Meaning, if someone keeps offering you $10 with 100% probability or $50 with 25% probability, it’s stupid to keep making the former choice (which is what most people do when you ask them) when you’re going to make much more money by making the latter choice. But again, remember, economic rationality isn’t about preferences per se, it’s about consistency of preferences. Risk aversion may violate a simplistic theory under which people are supposed to simply maximize expected value at all times; but then, no one’s really believed that for several hundred years. The standard economist’s response to the observation that people are risk averse is to observe that people aren’t maximizing expected value, they’re maximizing utility. Utility has a non-linear relationship with monetary value, so that people assign different weight to the (N+1)th dollar earned than to the Nth dollar earned. For instance, the classical value function identified by Kahneman and Tversky in their seminal work (for which Kahneman won the Nobel prize in part) looks like this: a curve that is concave for gains, and convex and steeper for losses.

The idea here is that the average person overvalues small gains relative to larger gains; i.e., you may be more satisfied when you receive $200 than when you receive $100, but you’re not going to be twice as satisfied.
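
For readers who prefer equations to pictures, here’s a minimal sketch of that value function in Python, using the parameter estimates Tversky and Kahneman later published for cumulative prospect theory (alpha = beta = 0.88, lambda = 2.25); treat the specific numbers as illustrative rather than definitive:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function (Tversky & Kahneman's 1992 estimates).

    Concave for gains, convex and steeper for losses (loss aversion).
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# Diminishing sensitivity: $200 feels good, but not twice as good as $100.
print(value(200) / value(100))        # ~1.84, not 2.0
# Loss aversion: losing $100 stings more than gaining $100 pleases.
print(abs(value(-100)) / value(100))  # ~2.25
```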

This seemed like a sufficient response for a while, since it appeared to preserve consistency as the hallmark of rationality. The idea is that you can have people who have more or less curvature in their value and probability weighting functions (i.e., some people are more risk averse than others), and that’s just fine as long as those preferences are consistent. Meaning, it’s okay if you prefer $50 with 25% probability to $10 with 100% probability just as long as you also prefer $50 with 25% probability to $8 with 100% probability, or to $7 with 100% probability, and so on. So long as your preferences are consistent, your behavior can be explained by RCT.

The problem, as many people have noted, is that in actuality there isn’t any set of consistent preferences that can explain most people’s risk averse behavior. A succinct and influential summary of the problem was provided by Rabin (2000), who showed formally that the choices people make when dealing with small amounts of money imply such an absurd level of risk aversion that the only way for them to be consistent would be to turn down gambles offering an infinitely large payoff whenever those gambles also carried even a modest potential loss. Put differently,

if a person always turns down a 50-50 lose $100/gain $110 gamble, she will always turn down a 50-50 lose $800/gain $2,090 gamble. … Somebody who always turns down 50-50 lose $100/gain $125 gambles will turn down any gamble with a 50% chance of losing $600.

The reason for this is simply that any concave utility function consistent with rejecting the low-magnitude gamble (e.g., a 50-50 bet with lose $100/gain $110 outcomes) at every wealth level has to flatten out (i.e., asymptote) fairly quickly. So for people to have internally consistent preferences, they would literally have to be turning down gambles with infinite upside just to avoid the risk of a modest loss. Which of course is absurd; in practice, you would have a hard time finding many people who would refuse a coin toss where they lose $600 on heads and win $$$infinity dollarz$$$ on tails. Though you might have a very difficult time convincing them you’re serious about the bet. And an even more difficult time finding infinity trucks with which to haul in those infinity dollarz in the event you lose.
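
If you want a rough sense of where numbers like that come from, here’s a crude numerical sketch of the calibration logic. To be clear, this is the flavor of Rabin’s argument rather than his actual theorem (his bounds are tighter because they exploit the wealth-dependence more aggressively), and the dollar amounts are just the ones from the example above:

```python
# Rough sketch of the Rabin-style calibration logic (not his exact theorem).
# Assumption: the agent rejects a 50-50 lose-$L / gain-$G gamble at EVERY wealth
# level. Rejection plus concavity then imply u'(w + G) <= (L/G) * u'(w - L):
# marginal utility shrinks by a factor of at least L/G over each window of
# width L + G dollars.

L, G = 100.0, 110.0      # the small gamble that is always rejected
window = L + G           # window width in dollars ($210)

# Geometric series: an upper bound on the utility gain from ANY wealth increase,
# expressed in units of the agent's current marginal utility ("dollars' worth").
# Equal to window / (1 - L/G), written here to avoid floating-point noise.
max_gain = window * G / (G - L)   # = 210 * 11 = 2310
print(max_gain)

# A loss of $X costs at least X such units (marginal utility only rises as
# wealth falls). So a 50-50 gamble that risks losing more than ~$2,310 gets
# rejected even if the upside is literally infinite.
X = 2500.0
print(0.5 * max_gain - 0.5 * X < 0)   # True: the gamble is turned down
```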

Anyway, these are just two prominent examples; there are literally hundreds of other similar examples in the behavioral economics literature of supposedly rational people displaying wildly inconsistent behavior. And not just a minority of people; it’s pretty much all of us. Presumably including economists. Irrationality, as it turns out, is the norm and not the exception. In some ways, what’s surprising is not that we’re inconsistent, but that we manage to do so well despite our many biases and failings.

To return to the puzzle Andrew Gelman posed, though, I suspect Andrew’s being facetious, and doesn’t really see this as much of a puzzle at all. Here’s his solution:

The key, I believe, is that “rationality” is a good thing. We all like to associate with good things, right? Argument 1 has a populist feel (people are rational!) and argument 2 has an elitist feel (economists are special!). But both are ways of associating oneself with rationality. It’s almost like the important thing is to be in the same room with rationality; it hardly matters whether you yourself are the exemplar of rationality, or whether you’re celebrating the rationality of others.

This seems like a somewhat more tactful way of saying what I suspect Andrew and many other people (and probably most academic psychologists, myself included) already believe, which is that there isn’t really any reason to think that people are rational in the sense demanded by RCT. That’s not to say economics is bunk, or that it doesn’t make sense to think about incentives as a means of altering behavior. Obviously, in a great many situations, pretending that people are rational is a reasonable approximation to the truth. For instance, in general, if you offer more money to have a job done, more people will be willing to do that job. But the fact that the tenets of standard economics often work shouldn’t blind us to the fact that they also often don’t, and that they fail in many systematic and predictable ways. For instance, sometimes paying people more money makes them perform worse, not better. And sometimes it saps them of the motivation to work at all. Faced with overwhelming empirical evidence that people don’t behave as the theory predicts, the appropriate response should be to revisit the theory, or at least to recognize which situations it should be applied in and which it shouldn’t.

Anyway, that’s a long-winded way of saying I don’t think Andrew’s puzzle is really a puzzle. Economists simply don’t express their own preferences and views about consistency consistently, and it’s not surprising, because neither does anyone else. That doesn’t make them (or us) bad people; it just makes us all people.

how many Cortex publications in the hand is a Nature publication in the bush worth?

A provocative and very short Opinion piece by Julien Mayor (Are scientists nearsighted gamblers? The misleading nature of impact factors) was recently posted on the Frontiers in Psychology website (open access! yay!). Mayor’s argument is summed up nicely in this figure:

The left panel plots the mean versus median number of citations per article in a given year (each year is a separate point) for 3 journals: Nature (solid circles), Psych Review (squares), and Psych Science (triangles). The right panel plots the number of citations each paper receives in each of the first 15 years following its publication. What you can clearly see is that (a) the mean and median are very strongly related for the psychology journals, but completely unrelated for Nature, implying that a very small number of articles account for the vast majority of Nature citations (Mayor cites data indicating that up to 40% of Nature papers are never cited); and (b) Nature papers tend to get cited heavily for a year or two, and then disappear, whereas papers in Psych Science, and particularly Psych Review, tend to have much longer shelf lives. Based on these trends, Mayor concludes that:

From this perspective, the IF, commonly accepted as golden standard for performance metrics seems to reward high-risk strategies (after all your Nature article has only slightly over 50% chance of being ever cited!), and short-lived outbursts. Are scientists then nearsighted gamblers?

I’d very much like to believe this, in that I think the massive emphasis scientists collectively place on publishing work in broad-interest, short-format journals like Nature and Science is often quite detrimental to the scientific enterprise as a whole. But I don’t actually believe it, because I think that, for any individual paper, researchers generally do have good incentives to try to publish in the glamor mags rather than in more specialized journals. Mayor’s figure, while informative, doesn’t take a number of factors into account:

  • The types of papers that get published in Psych Review and Nature are very different. Review papers, in general, tend to get cited more often, and for a longer time. A better comparison would be between Psych Review papers and only review papers in Nature (there aren’t many of them, unfortunately). My guess is that that difference alone probably explains much of the difference in citation rates later on in an article’s life. That would also explain why the temporal profile of Psych Science articles (which are also overwhelmingly short empirical reports) is similar to that of Nature. Major theoretical syntheses stay relevant for decades; individual empirical papers, no matter how exciting, tend to stop being cited as frequently once (a) the finding fails to replicate, or (b) a literature builds up around the original report, and researchers stop citing individual studies and start citing review articles (e.g., in Psych Review).
  • Scientists don’t just care about citation counts, they also care about reputation. The reality is that much of the appeal of having a Nature or Science publication isn’t necessarily that you expect the work to be cited much more heavily, but that you get to tell everyone else how great you must be because you have a publication in Nature. Now, on some level, we know that it’s silly to hold glamor mags in such high esteem, and Mayor’s data are consistent with that idea. In an ideal world, we’d read all papers ultra-carefully before making judgments about their quality, rather than using simple but flawed heuristics like what journal those papers happen to be published in. But this isn’t an ideal world, and the reality is that people do use such heuristics. So it’s to each scientist’s individual advantage (but to the field’s detriment) to take advantage of that knowledge.
  • Different fields have very different citation rates. And articles in different fields have very different shelf lives. For instance, I’ve heard that in many areas of physics, the field moves so fast that articles are basically out of date within a year or two (I have no way to verify if this is true or not). That’s certainly not true of most areas of psychology. For instance, in cognitive neuroscience, the current state of the field in many areas is still reasonably well captured by highly-cited publications that are 5 – 10 years old. Most behavioral areas of psychology seem to advance even more slowly. So one might well expect articles in psychology journals to peak later in time than the average Nature article, because Nature contains a high proportion of articles in the natural sciences.
  • Articles are probably selected for publication in Nature, Psych Science, and Psych Review for different reasons. In particular, there’s no denying the fact that Nature selects articles in large part based on the perceived novelty and unexpectedness of the result. That’s not to say that methodological rigor doesn’t play a role, just that, other things being equal, unexpected findings are less likely to be replicated. Since Nature and Science overwhelmingly publish articles with new and surprising findings, it shouldn’t be surprising if the articles in these journals have a lower rate of replication several years on (and hence, stop being cited). That’s presumably going to be less true of articles in specialist journals, where novelty factor and appeal to a broad audience are usually less important criteria.

Addressing these points would probably go a long way towards closing, and perhaps even reversing, the gap implied by Mayor’s figure. I suspect that if you could do a controlled experiment and publish the exact same article in Nature and Psych Science, it would tend to get cited more heavily in Nature over the long run. So in that sense, if citations were all anyone cared about, I think it would be perfectly reasonable for scientists to try to publish in the most prestigious journals–even though, again, I think the pressure to publish in such journals actually hurts the field as a whole.

Of course, in reality, we don’t just care about citation counts anyway; lots of other things matter. For one thing, we also need to factor in the opportunity cost associated with writing a paper up in a very specific format for submission to Nature or Science, knowing that we’ll probably have to rewrite much or all of it before it gets published. All that effort could probably have been spent on other projects, so one way to put the question is: how many lower-tier publications in the hand is a top-tier publication in the bush worth?

Ultimately, it’s an empirical matter; I imagine if you were willing to make some strong assumptions, and collect the right kind of data, you could come up with a meaningful estimate of the actual value of a Nature publication, as a function of important variables like the number of other publications the authors had, the amount of work invested in rewriting the paper after rejection, the authors’ career stage, etc. But I don’t know of any published work to that effect; it seems like it would probably be more trouble than it was worth (or, to get meta: how many Nature manuscripts can you write in the time it takes you to write a manuscript about how many Nature manuscripts you should write?). And, to be honest, I suspect that any estimate you obtained that way would have little or no impact on the actual decisions scientists make about where to submit their manuscripts anyway, because, in practice, such decisions are driven as much by guesswork and wishful thinking as by any well-reasoned analysis. And on that last point, I speak from extensive personal experience…

the naming of things

Let’s suppose you were charged with the important task of naming all the various subdisciplines of neuroscience that have anything to do with the field of research we now know as psychology. You might come up with some or all of the following terms, in no particular order:

  • Neuropsychology
  • Biological psychology
  • Neurology
  • Cognitive neuroscience
  • Cognitive science
  • Systems neuroscience
  • Behavioral neuroscience
  • Psychiatry

That’s just a partial list; you’re resourceful, so there are probably others (biopsychology? psychobiology? psychoneuroimmunology?). But it’s a good start. Now suppose you decided to make a game out of it, and threw a dinner party where each guest received a copy of your list (discipline names only–no descriptions!) and had to guess what they thought people in that field study. If your nomenclature made any sense at all, and tried to respect the meanings of the individual words used to generate the compound words or phrases in your list, your guests might hazard something like the following guesses:

  • Neuropsychology: “That’s the intersection of neuroscience and psychology. Meaning, the study of the neural mechanisms underlying cognitive function.”
  • Biological psychology: “Similar to neuropsychology, but probably broader. Like, it includes the role of genes and hormones and kidneys in cognitive function.”
  • Neurology: “The pure study of the brain, without worrying about all of that associated psychological stuff.”
  • Cognitive neuroscience: “Well if it doesn’t mean the same thing as neuropsychology and biological psychology, then it probably refers to the branch of neuroscience that deals with how we think and reason. Kind of like cognitive psychology, only with brains!”
  • Cognitive science: “Like cognitive neuroscience, but not just for brains. It’s the study of human cognition in general.”
  • Systems neuroscience: “Mmm… I don’t really know. The study of how the brain functions as a whole system?”
  • Behavioral neuroscience: “Easy: it’s the study of the relationship between brain and behavior. For example, how we voluntarily generate actions.”
  • Psychiatry: “That’s the branch of medicine that concerns itself with handing out multicolored pills that do funny things to your thoughts and feelings. Of course.”

If this list seems sort of sensible to you, you probably live in a wonderful world where compound words mean what you intuitively think they mean, the subject matter of scientific disciplines can be transparently discerned, everyone eats ice cream for dinner every night, and terms that sound extremely similar have extremely similar referents rather than referring to completely different fields of study. Unfortunately, that world is not the world we happen to actually inhabit. In our world, most of the disciplines at the intersection of psychology and neuroscience have funny names that reflect accidents of history, and tell you very little about what the people in that field actually study.

Here’s the list your guests might hand back in this world, if you ever made the terrible, terrible mistake of inviting a bunch of working scientists to dinner:

  • Neuropsychology: The study of how brain damage affects cognition and behavior. Most often focusing on the effects of brain lesions in humans, and typically relying primarily on behavioral evaluations (i.e., no large magnetic devices that take photographs of the space inside people’s skulls). People who call themselves neuropsychologists are overwhelmingly trained as clinical psychologists, and many of them work in big white buildings with a red cross on the front. Note that this isn’t the definition of neuropsychology that Wikipedia gives you; Wikipedia seems to think that neuropsychology is “the basic scientific discipline that studies the structure and function of the brain related to specific psychological processes and overt behaviors.” Nice try, Wikipedia, but that’s much too general. You didn’t even use the words ‘brain damage’, ‘lesion’, or ‘patient’ in the first sentence.
  • Biological psychology: To be perfectly honest, I’m going to have to step out of dinner-guest character for a moment and admit I don’t really have a clue what biological psychologists study. I can’t remember the last time I heard someone refer to themselves as a biological psychologist. To an approximation, I think biological psychology differs from, say, cognitive neuroscience in placing greater emphasis on everything outside of higher cognitive processes (sensory systems, autonomic processes, the four F’s, etc.). But that’s just idle speculation based largely on skimming through the chapter names of my old “Biological Psychology” textbook. What I can definitively, confidently, comfortably, tentatively, recklessly assert is that you really don’t want to trust the Wikipedia definition here, because when you type ‘biological psychology’ into that little box that says ‘search’ on Wikipedia, it redirects you to the behavioral neuroscience entry. And that can’t be right, because, as we’ll see in a moment, behavioral neuroscience refers to something very different…
  • Neurology: Hey, look! A wikipedia entry that doesn’t lie to our face! It says neurology is “a medical specialty dealing with disorders of the nervous system. Specifically, it deals with the diagnosis and treatment of all categories of disease involving the central, peripheral, and autonomic nervous systems, including their coverings, blood vessels, and all effector tissue, such as muscle.” That’s a definition I can get behind, and I think 9 out of 10 dinner guests would probably agree (the tenth is probably drunk). But then, I’m not (that kind of) doctor, so who knows.
  • Cognitive neuroscience: In principle, cognitive neuroscience actually means more or less what it sounds like it means. It’s the study of the neural mechanisms underlying cognitive function. In practice, it all goes to hell in a handbasket when you consider that you can prefix ‘cognitive neuroscience’ with pretty much any adjective you like and end up with a valid subdiscipline. Developmental cognitive neuroscience? Check. Computational cognitive neuroscience? Check. Industrial/organizational cognitive neuroscience? Amazingly, no; until just now, that phrase did not exist on the internet. But by the time you read this, Google will probably have a record of this post, which is really all it takes to legitimate I/OCN as a valid field of inquiry. It’s just that easy to create a new scientific discipline, so be very afraid–things are only going to get messier.
  • Cognitive science: A field that, by most accounts, lives up to its name. Well, kind of. Cognitive science sounds like a blanket term for pretty much everything that has to do with cognition, and it sort of is. You have psychology and linguistics and neuroscience and philosophy and artificial intelligence all represented. I’ve never been to the annual CogSci conference, but I hear it’s a veritable orgy of interdisciplinary activity. Still, I think there’s a definite bias towards some fields at the expense of others. Neuroscientists (of any stripe), for instance, rarely call themselves cognitive scientists. Conversely, philosophers of mind or language love to call themselves cognitive scientists, and the jerk (okay, cynic) in me says it’s because it means they get to call themselves scientists. Also, in terms of content and coverage, there seems to be a definite emphasis among self-professed cognitive scientists on computational and mathematical modeling, and not so much emphasis on developing neuroscience-based models (though neural network models are popular). Still, if you’re scoring terms based on clarity of usage, cognitive science should score at least an 8.5 / 10.
  • Systems neuroscience: The study of neural circuits and the dynamics of information flow in the central nervous system (note: I stole part of that definition from MIT’s BCS website, because MIT people are SMART). Systems neuroscience doesn’t overlap much with psychology; you can’t defensibly argue that the temporal dynamics of neuronal assemblies in sensory cortex have anything to do with human cognition, right? I just threw this in to make things even more confusing.
  • Behavioral neuroscience: This one’s really great, because it has almost nothing to do with what you think it does. Well, okay, it does have something to do with behavior. But it’s almost exclusively animal behavior. People who refer to themselves as behavioral neuroscientists are generally in the business of poking rats in the brain with very small, sharp, glass objects; they typically don’t care much for human beings (professionally, that is). I guess that kind of makes sense when you consider that you can have rats swim and jump and eat and run while electrodes are implanted in their heads, whereas most of the time when we study human brains, they’re sitting motionless in (a) a giant magnet, (b) a chair, or (c) a jar full of formaldehyde. So maybe you could make an argument that since humans don’t get to BEHAVE very much in our studies, people who study humans can’t call themselves behavioral neuroscientists. But that would be a very bad argument to make, and many of the people who work in the so-called “behavioral sciences” and do nothing but study human behavior would probably be waiting to thump you in the hall the next time they saw you.
  • Psychiatry: The branch of medicine that concerns itself with handing out multicolored pills that do funny things to your thoughts and feelings. Of course.

Anyway, the basic point of all this long-winded nonsense is just that, for all that stuff we tell undergraduates about how science is such a wonderful way to achieve clarity about the way the world works, scientists–or at least, neuroscientists and psychologists–tend to carve up their disciplines in pretty insensible ways. That doesn’t mean we’re dumb, of course; to the people who work in a field, the clarity (or lack thereof) of the terminology makes little difference, because you only need to acquire it once (usually in your first nine years of grad school), and after that you always know what people are talking about. Come to think of it, I’m pretty sure the whole point of learning big words is that once you’ve successfully learned them, you can stop thinking deeply about what they actually mean.

It is kind of annoying, though, to have to explain to undergraduates that, DUH, the class they really want to take given their interests is OBVIOUSLY cognitive neuroscience and NOT neuropsychology or biological psychology. I mean, can’t they read? Or to pedantically point out to someone you just met at a party that saying “the neurological mechanisms of such-and-such” makes them sound hopelessly unsophisticated, and what they should really be saying is “the neural mechanisms,” or “the neurobiological mechanisms”, or (for bonus points) “the neurophysiological substrates”. Or, you know, to try (unsuccessfully) to convince your mother on the phone that even though it’s true that you study the relationship between brains and behavior, the field you work in has very little to do with behavioral neuroscience, and so you really aren’t an expert on that new study reported in that article she just read in the paper the other day about that interesting thing that’s relevant to all that stuff we all do all the time.

The point is, the world would be a slightly better place if cognitive science, neuropsychology, and behavioral neuroscience all meant what they seem like they should mean. But only very slightly better.

Anyway, aside from my burning need to complain about trivial things, I bring these ugly terminological matters up partly out of idle curiosity. And what I’m idly curious about is this: does this kind of confusion feature prominently in other disciplines too, or is psychology-slash-neuroscience just, you know, “special”? My intuition is that it’s the latter; subdiscipline names in other areas just seem so sensible to me whenever I hear them. For instance, I’m fairly confident that organic chemists study the chemistry of Orgas, and I assume condensed matter physicists spend their days modeling the dynamics of teapots. Right? Yes? No? Perhaps my millions (thousands? hundreds? dozens? okay, three) regular readers can enlighten me in the comments…