Andrew Gelman discusses a “puzzle that’s been bugging [him] for a while”:
Pop economists (or, at least, pop micro-economists) are often making one of two arguments:
1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist.
2. People are irrational and they need economists, with their open minds, to show them how to be rational and efficient.
Argument 1 is associated with “why do they do that?” sorts of puzzles. Why do they charge so much for candy at the movie theater, why are airline ticket prices such a mess, why are people drug addicts, etc. The usual answer is that there’s some rational reason for what seems like silly or self-destructive behavior.
Argument 2 is associated with “we can do better” claims such as why we should fire 80% of public-school teachers or Moneyball-style stories about how some clever entrepreneur has made a zillion dollars by exploiting some inefficiency in the market.
The trick is knowing whether you’re gonna get 1 or 2 above. They’re complete opposites!
Personally what I find puzzling isn’t really how to reconcile these two strands (which do seem to somehow coexist quite peacefully in pop economists’ writings); it’s how anyone–economist or otherwise–still manages to believe people are rational in any meaningful sense (and I’m not saying Andrew does; in fact, see below).
There are at least two non-trivial ways to define rationality. One is in terms of an ideal agent’s actions: rationality is what a decision-maker would choose to do if she had unlimited cognitive resources and knew all the information relevant to a given decision. Well, okay, maybe not an ideal agent, but at the very least a very smart one. This is the sense of rationality in which you might colloquially remark to your neighbor that buying lottery tickets is an irrational thing to do, because the odds are stacked against you. The expected value of buying a lottery ticket (i.e., the average amount you’d expect to win or lose per ticket if you kept playing over the long run) is generally negative, so in some normative sense, you could say it’s irrational to buy lottery tickets.
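Just to make that concrete, here’s a back-of-the-envelope version of the calculation. The numbers are made up (a $2 ticket, a one-in-ten-million shot at a $5 million jackpot); real lotteries have more prize tiers and different odds, but the conclusion, a negative expected value, comes out the same.

```python
# Back-of-the-envelope expected value of a lottery ticket, using made-up numbers:
# a $2 ticket and a 1-in-10-million chance at a $5 million jackpot.

ticket_price = 2.00
p_win = 1 / 10_000_000
jackpot = 5_000_000

# Average net outcome per ticket if you kept playing forever:
expected_value = p_win * jackpot - ticket_price
print(f"Expected value per ticket: ${expected_value:.2f}")  # about -$1.50
```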
This normative definition of rationality is probably quite close to the colloquial usage of the term, but it’s not really interesting from an academic standpoint, because nobody (economists included) really believes we’re rational in this sense. It’s blatantly obvious to everyone that none of us makes normatively correct choices much of the time, if for no other reason than that we’re all somewhat lacking in the omniscience department.
What economists mean when they talk about rationality is something more technical: specifically, that people have consistent preferences. That is, given any set of preferences an individual happens to have (which may seem completely crazy to everyone else), rationality implies that that person expresses those preferences in a consistent manner. If you like dark chocolate more than milk chocolate, and milk chocolate more than Skittles, you shouldn’t like Skittles more than dark chocolate. If you do, you’re violating the principle of transitivity, which would effectively make it impossible to model your preferences formally (since we’d have no way of telling what you’d prefer in any given situation). And that would be a problem for standard economic theory, which is based on the assumption that people are fundamentally rational agents (in this particular sense).
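To see what the consistency requirement amounts to in practice, here’s a toy sketch in Python. The preferences are hypothetical, chosen to mirror the chocolate/Skittles example above; the point is just that once a cycle like this shows up, there is no ranking of the options that can reproduce the choices.

```python
# Toy illustration of the transitivity requirement: given pairwise preferences
# ("X is preferred to Y"), check whether any intransitive triple exists.
# The preferences are hypothetical, mirroring the chocolate/Skittles example.

from itertools import permutations

preferences = {
    ("dark chocolate", "milk chocolate"),
    ("milk chocolate", "Skittles"),
    ("Skittles", "dark chocolate"),  # the offending preference
}

def is_transitive(prefers):
    """Return False if some triple a > b > c > a exists among the stated preferences."""
    items = {x for pair in prefers for x in pair}
    for a, b, c in permutations(items, 3):
        if (a, b) in prefers and (b, c) in prefers and (c, a) in prefers:
            return False
    return True

print(is_transitive(preferences))  # False: no single ranking can generate these choices
```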
The reason I say it’s puzzling that anyone still believes people are rational in even this narrower sense is that decades of behavioral economics and psychology research have repeatedly demonstrated that people just don’t have consistent preferences. You can radically influence and alter decision-makers’ behavior in all sorts of ways that simply aren’t predicted or accounted for by Rational Choice Theory (RCT). I’ll give just two examples here, but there are any number of others, as many excellent books attest (e.g., Dan Ariely’s Predictably Irrational, or Thaler and Sunstein’s Nudge).
The first example stems from famous work by Madrian and Shea (2001) investigating the effects of savings plan designs on employees’ 401(k) choices. By pretty much anyone’s account, decisions about savings plans should be a pretty big deal for most employees. The difference between opting into a 401(k) and opting out of one can easily amount to several hundred thousand dollars over the course of a lifetime, so you would expect people to have a huge incentive to make the choice that’s most consistent with their personal preferences (whether those preferences happen to be for splurging now or saving for later). Yet what Madrian and Shea convincingly showed was that most employees simply go with the default plan option. When companies switch from opt-in to opt-out (i.e., instead of calling up HR and saying you want to join the plan, you’re enrolled by default, and have to fill out a form if you want to opt out), nearly 50% more employees end up enrolled in the 401(k).
This result (and any number of others along similar lines) makes no sense under rational choice theory, because it’s virtually impossible to conceive of a consistent set of preferences that would explain this type of behavior. Many of the same employees who won’t take ten minutes out of their day to opt in or out of their 401(k) will undoubtedly drive across town to save a few dollars on their groceries; like most people, they’ll look for bargains, buy cheaper goods rather than more expensive ones, worry about leaving something for their children after they’re gone, and so on and so forth. And one can’t simply attribute the discrepancy in behavior to ignorance (i.e., “no one reads the fine print!”), because the whole point of a massive incentive is that it’s supposed to motivate you to do things like look up information that could be relevant to, oh, say, having hundreds of thousands of extra dollars in your bank account in forty years. If you’re willing to look for coupons in the Sunday paper to save a few dollars, but aren’t willing to call up HR and ask about your savings plan, there is, to put it frankly, something mildly inconsistent about your preferences.
The other example stems from the enormous literature on risk aversion. The classic risk aversion finding is that most people require a higher nominal payoff on risky prospects than on safe ones before they’re willing to accept the risky prospect. For instance, most people would rather have $10 for sure than $50 with 25% probability, even though the expected value of the latter ($12.50) is 25% higher than the former (an amazing return!). Risk aversion is a pervasive phenomenon, and crops up everywhere, including in financial investments, where it gives rise to the equity premium puzzle (the puzzle being that many investors prefer bonds to stocks even though the historical record suggests a massively higher rate of return for stocks over the long term).
From a naive standpoint, you might think the challenge risk aversion poses to rational choice theory is that risk aversion is just, you know, stupid. Meaning, if someone keeps offering you $10 with 100% probability or $50 with 25% probability, it’s stupid to keep making the former choice (which is what most people do when you ask them) when you’re going to make much more money in the long run by making the latter choice. But again, remember, economic rationality isn’t about preferences per se, it’s about consistency of preferences. Risk aversion may violate a simplistic theory under which people are supposed to simply maximize expected value at all times; but then, no one’s really believed that for several hundred years. The standard economist’s response to the observation that people are risk averse is to point out that people aren’t maximizing expected value, they’re maximizing expected utility. Utility is a non-linear function of objective value, so that people assign different weight to the (N+1)th dollar earned than to the Nth dollar earned. For instance, the classic value function identified by Kahneman and Tversky in their seminal work on prospect theory (work for which Kahneman later won the Nobel prize) is concave for gains and convex, and steeper, for losses.
The idea here is that the average person overvalues small gains relative to larger gains; i.e., you may be more satisfied when you receive $200 than when you receive $100, but you’re not going to be twice as satisfied.
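To see how curvature alone can produce risk-averse choices, here’s a minimal sketch. The square-root function is an assumed stand-in for a concave value function; it is not Kahneman and Tversky’s actual parameterization (which also involves loss aversion and probability weighting), but it’s enough to show why the sure $10 can beat the 25% shot at $50 despite having a lower expected value.

```python
# Minimal sketch: a concave value function makes $10 for sure beat a 25% shot at $50,
# even though the gamble has the higher expected value. The square root is an assumed
# stand-in for "diminishing returns"; it is not Kahneman and Tversky's parameterization.

import math

def value(amount):
    """Concave (diminishing-returns) subjective value of a monetary gain."""
    return math.sqrt(amount)

# Prospect A: $10 for sure.  Prospect B: $50 with probability 0.25, else nothing.
ev_a, ev_b = 10.0, 0.25 * 50             # expected dollars: 10.0 vs 12.5
v_a, v_b = value(10), 0.25 * value(50)   # expected subjective value: ~3.16 vs ~1.77

print(f"Expected value:   A = {ev_a:.2f}, B = {ev_b:.2f}")  # B wins on dollars...
print(f"Expected utility: A = {v_a:.2f}, B = {v_b:.2f}")    # ...but A wins on utility
```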
This seemed like a sufficient response for a while, since it appears to preserve consistency as the hallmark of rationality. The idea is that you can have people who have more or less curvature in their value and probability weighting functions (i.e., some people are more risk averse than others), and that’s just fine as long as those preferences are consistent. Meaning, it’s okay if you prefer $50 with 25% probability to $10 with 100% probability just as long as you also prefer $50 with 25% probability to $8 with 100% probability, or to $7 with 100% probability, and so on. So long as your preferences are consistent, your behavior can be explained by RCT.
The problem, as many people have noted, is that in actuality there isn’t any set of consistent preferences that can explain most people’s risk averse behavior. A succinct and influential summary of the problem was provided by Rabin (2000), who showed formally that the choices people make when dealing with small amounts of money imply such an absurd level of risk aversion that, to be consistent, people would also have to reject gambles offering arbitrarily large payoffs whenever the potential loss was only modestly larger than in the small-stakes gambles they routinely refuse. Put differently,
if a person always turns down a 50-50 lose $100/gain $110 gamble, she will always turn down a 50-50 lose $800/gain $2,090 gamble. … Somebody who always turns down 50-50 lose $100/gain $125 gambles will turn down any gamble with a 50% chance of losing $600.
The reason for this is simply that any concave utility function consistent with turning down the low-magnitude prospects (e.g., a refusal to take a 50-50 bet with lose $100/gain $110 outcomes) at every level of wealth has to flatten out so quickly that it is effectively bounded. So for people to have internally consistent preferences, they would literally have to be turning down gambles with an infinite (but uncertain) upside just to avoid risking a modest loss. Which of course is absurd; in practice, you would have a hard time finding many people who would refuse a coin toss where they lose $600 on heads and win $$$infinity dollarz$$$ on tails. Though you might have a very difficult time convincing them you’re serious about the bet. And an even more difficult time finding infinity trucks with which to haul over those infinity dollarz in the event you lose.
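For the numerically inclined, here’s a rough sketch of the flavor of that argument. It is not Rabin’s theorem, which holds for any concave utility function; it just picks one concrete concave utility (a CARA function, with a risk-aversion parameter chosen by assumption to rationalize rejecting the small gamble at every wealth level) and shows what that single commitment already implies at larger stakes.

```python
# Rough numerical sketch of the flavor of Rabin's calibration result (not the theorem
# itself, which covers any concave utility). We pick one concrete utility function,
# CARA: u(x) = (1 - exp(-k*x)) / k, where x is the change in wealth. With CARA,
# accepting or rejecting a gamble doesn't depend on current wealth, which matches the
# "turns it down at every wealth level" premise. The value k = 0.001 is an assumption,
# chosen to just barely rationalize rejecting the small gamble.

import math

K = 0.001  # assumed coefficient of absolute risk aversion

def utility(change_in_wealth):
    """CARA utility of a change in wealth, scaled so a marginal dollar at 0 is worth ~1."""
    return (1 - math.exp(-K * change_in_wealth)) / K

def accepts(p_win, gain, loss):
    """Prefer the gamble (win `gain` with prob p_win, else lose `loss`) to the status quo?"""
    return p_win * utility(gain) + (1 - p_win) * utility(-loss) > 0

print(accepts(0.5, 110, 100))            # False: rejects 50-50 lose $100 / gain $110
print(accepts(0.5, 1_000_000_000, 800))  # False: also rejects 50-50 lose $800 / gain $1 billion
print(accepts(0.5, 1_000_000_000, 600))  # True: a $600 downside is finally small enough
```

The reason the billion-dollar gamble gets rejected is exactly the one described above: under this utility function, gains flatten out so quickly that even an unbounded payoff is worth at most a fixed amount (here, 1/k, or 1,000 units), while losses keep hurting more and more steeply.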
Anyway, these are just two prominent examples; there are literally hundreds of other similar examples in the behavioral economics literature of supposedly rational people displaying wildly inconsistent behavior. And not just a minority of people; it’s pretty much all of us. Presumably including economists. Irrationality, as it turns out, is the norm and not the exception. In some ways, what’s surprising is not that we’re inconsistent, but that we manage to do so well despite our many biases and failings.
To return to the puzzle Andrew Gelman posed, though, I suspect Andrew’s being facetious, and doesn’t really see this as much of a puzzle at all. Here’s his solution:
The key, I believe, is that “rationality” is a good thing. We all like to associate with good things, right? Argument 1 has a populist feel (people are rational!) and argument 2 has an elitist feel (economists are special!). But both are ways of associating oneself with rationality. It’s almost like the important thing is to be in the same room with rationality; it hardly matters whether you yourself are the exemplar of rationality, or whether you’re celebrating the rationality of others.
This seems like a somewhat more tactful way of saying what I suspect Andrew and many other people (and probably most academic psychologists, myself included) already believe, which is that there isn’t really any reason to think that people are rational in the sense demanded by RCT. That’s not to say economics is bunk, or that it doesn’t make sense to think about incentives as a means of altering behavior. Obviously, in a great many situations, pretending that people are rational is a reasonable approximation to the truth. For instance, in general, if you offer more money to have a job done, more people will be willing to do that job. But the fact that the tenets of standard economics often work shouldn’t blind us to the fact that they also often don’t, and that they fail in many systematic and predictable ways. For instance, sometimes paying people more money makes them perform worse, not better. And sometimes it saps them of the motivation to work at all. Faced with overwhelming empirical evidence that people don’t behave as the theory predicts, the appropriate response should be to revisit the theory, or at least to recognize which situations it should be applied in and which it shouldn’t.
Anyway, that’s a long-winded way of saying I don’t think Andrew’s puzzle is really a puzzle. Economists simply don’t express their own preferences and views about consistency consistently, and it’s not surprising, because neither does anyone else. That doesn’t make them (or us) bad people; it just makes us all people.