in which I suffer a minor setback due to hyperbolic discounting

I wrote a paper with some collaborators that was officially published today in Nature Methods (though it’s been available online for a few weeks). I spent a year of my life on this (a YEAR! That’s like 30 years in opossum years!), so go read the abstract, just to humor me. It’s about large-scale automated synthesis of human functional neuroimaging data. In fact, it’s so about that that that’s the title of the paper*. There’s also a companion website over here, which you might enjoy playing with if you like brains.

I plan to write a long post about this paper at some point in the near future, but not today. What I will do today is tell you all about why I didn’t write anything about the paper much earlier (i.e., 4 weeks ago, when it appeared online), because you seem very concerned. You see, I had grand plans for writing a very detailed and wonderfully engaging multi-part series of blog posts about the paper, starting with the background and motivation for the project (that would have been Part 1), then explaining the methods we used (Part 2), then the results (III; let’s switch to Roman numerals for effect), then some of the implications (IV), then some potential applications and future directions (V), then some stuff that didn’t make it into the paper (VI), and then, finally, a behind-the-science account of how it really all went down (VII; complete with filmed interviews with collaborators who left the project early due to creative differences). A seven-part blog post! All about one paper! It would have been longer than the article itself! And all the supplemental materials! Combined! Take my word for it, it would have been amazing.

Unfortunately, like most everyone else, I’m a much better person in the future than I am in the present; things that would take me a week of full-time work in the Now apparently take me only five to ten minutes when I plan them three months ahead of time. If you plotted my temporal discounting curve for intellectual effort, it would look like a textbook hyperbola, V(d) = A/(1 + kd), with an embarrassingly large k.
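If you want to see the damage rendered graphically, here’s a toy sketch (the parameter values are pure self-parody, not empirical estimates of anything):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hyperbolic discounting: the subjective weight of a delayed outcome falls
# off as V(d) = A / (1 + k*d), where d is the delay and k controls how
# steeply the future is discounted. Both values below are jokes, not data.
A, k = 1.0, 2.5                 # A = how effortful the work feels right now
d = np.linspace(0, 12, 200)     # delay before the work is due, in weeks
V = A / (1 + k * d)

plt.plot(d, V)
plt.xlabel("delay before I actually have to do the work (weeks)")
plt.ylabel("subjective effort")
plt.title("my intellectual-effort discounting curve (toy sketch)")
plt.show()
```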

So that’s why my seven-part series of blog posts didn’t debut at the same time the paper was published online a few weeks ago. In fact, it hasn’t debuted at all. At this point, my much more modest goal is just to write a single much shorter post, which will no longer be able to DEBUT, but can at least slink into the bar unnoticed while everyone else is out on the patio having a smoke. And really, I’m only doing it so I can look myself in the eye again when I look at myself in the mirror. Because it turns out it’s very hard to shave your face safely if you’re not allowed to look yourself in the eye. And my labmates are starting to call me PapercutMan, which isn’t really a superpower worth having.

So yeah, I’ll write something about this paper soon. But just to play it safe, I’m not going to operationally define ‘soon’ right now.

 

* Three “that”s in a row! What are the odds! Good luck parsing that sentence!

sunbathers in America

This is fiction. Kind of. Science left for a few days and asked fiction to care for the house.


I ran into my friend, Cornelius Kipling, at the grocery store. He was ahead of me in line, holding a large eggplant and a copy of the National Enquirer. I didn’t ask about it.

I hadn’t seen Kip in six months, so we went for a walk along Boulder Creek to catch up. Kip has a Ph.D. in molecular engineering from Ben-Gurion University of the Negev, and an MBA from an online degree mill. He’s the only person I know who combines an earnest desire to save the world with the scruples of a small-time mafia don. He’s an interesting person to talk to as long as you remember that he gets most of his ideas out of mail-order catalogs.

“What are you working on these days?” I asked him after I’d stashed my groceries in the fridge and retrieved my wallet from his pocket. Last I’d heard, Kip was involved in a minor arson case and couldn’t come within three thousand feet of any Monsanto office.

“Saving lives,” he said, in the same matter-of-fact way that a janitor will tell you he cleans bathrooms. “Small lives. Fireflies. I’m making miniature organic light-emitting diodes that save fireflies from certain death at the hands of the human industrial-industrial complex.”

“The industrial human what?”

“Exactly,” he said, ignoring the question. “We’re developing new LEDs that mimic the light fireflies give off. The purpose of the fire in fireflies, you see, is to attract mates. Bigger light, better mate. The problem is, humans have much bigger lights than fireflies. So fireflies end up trying to mate with incandescents. You turn on a light bulb outside, and pffftttt there go a dozen bugs. It’s genocide, only on a larger scale. Whereas the LEDs we’re building attract fireflies like crazy but aren’t hot enough to harm them. At worst, you’ve got a device guaranteed to start a firefly orgy when it turns on.”

“Well, that absolutely sounds like another winning venture,” I said. “Oh, hey, what happened to the robot-run dairy you were going to start?”

“The cow drowned,” he said wistfully. We spent a few moments in silence while I waited for conversational manna to rain down on my head. It didn’t.

“I didn’t mean to mock you,” I said finally. “I mean, yes, of course I meant to mock you. But with love. Not like an asshole. You know.”

“S’okay. Your sarcasm is an ephemeral, transient thing–like summer in the Yukon–but the longevity of the firefly is a matter of life and death.”

“Sure it is,” I said. “For the fireflies.”

“This is the potential impact of my work right now,” Kip said, holding his hands a foot apart, as if he were cupping a large balloon. “The oldest firefly in captivity just turned forty-one. That’s eleven years older than us. But in the wild, the average firefly only lives six weeks. Mostly because of contact with the residues of the industrial-industrial complex. Compact fluorescents, parabolic aluminized reflectors, MR halogens, Rizzuto globes, and regular old incandescents. Historically, the common firefly stood no chance against us. But now, I am its redress. I am the Genghis Khan of the Lampyridae Mongol horde. Prepare to be pillaged.”

“I think you just make this stuff up,” I said, wincing at the analogy. “I mean, I’m not one hundred percent sure. But I’m very close to one hundred percent sure.”

“Your envy of other people’s imagination is your biggest problem,” said Kip, rubbing his biceps in lazy circles through his shirt. “And my biggest problem is: I need more imaginative friends. Just this morning, in the shower, this question popped into my head, and it’s been bugging me ever since: if you could be any science fiction character, who would you be? But I can’t ask you what you think; you have no vision. You didn’t even ask me why I was checking out with nothing but an eggplant when you saw me at the grocery store.”

“It’s not a vision problem,” I said. “It’s strictly a science fiction problem. I’m just no good at it. I’ll sit down to read a Ben Bova book, and immediately my egg timer will go off, or I’ll remember I need to renew my annual subscription to Vogue. That stuff never happens when I read Jane Austen or Asterix. Plus, I have this long-standing fear that if I read a lot of sci-fi, I’ll learn too much about the future; more than is healthy for any human being to know. There are like three hundred thousand science fiction novels in print, but we only have one future between all of us. The odds are good that at least one of those novels is basically right about what will happen. I won’t even watch a ninety-minute slasher film if someone tells me ahead of time that the killer is the girl from Ipanema with the dragon tattoo; why would I want to read all that science fiction and find out that thirty years from now, sentient goats from Zorbon will land on Mt. Rushmore and enslave us all, starting with the lawyers?”

“See,” he said. “No answer. Simple question, but no answer.”

“Fine,” I said. “If I must. Hari Seldon.”

“Good. Why?”

“Because,” I said, “unlike the real world, Hari Seldon lives in a mysterious future where psychologists can actually predict people’s behavior.”

“Predicting things is not so hard,” said Kip. “Take for instance the weather. It’s like ninety-three degrees today, which means the nudists will be out in force on the rocks by the Gold Run condos. It’s the only time they have a legitimate excuse to expose their true selves.”

We walked another fifty paces.

“See?” he said, as we stepped off a bridge and rounded a corner along the path. “There they are.”

I nodded. There they were: young, old, and pantsless all over.

“Personally, I always wanted to be Superman,” Kip said as we kept walking. He traced an S through his sweat-stained shirt. “Like every other kid I guess. But then when I hit puberty, I realized being Superman is a lot of responsibility. You can’t sit naked on the rocks on a hot day. Not when you’re Superman. You can’t really do anything just for fun. You can’t punch a hole in the wall to annoy your neighbor who smokes a pack a day and makes the whole building smell like stale menthol. You can’t even use your x-ray vision to stare at his wife in the shower. You need a reason for everything you do; the citizens of Metropolis demand accountability. So instead of being Superman, I figured I’d keep the S on the chest, but make it stand for ‘Science’. And now my guiding philosophy is to go through life always performing random acts of scientific kindness but never explicitly committing to help anyone. That way I can be a fundamentally decent human being who still occasionally pops into a titty bar for a late buffet-style lunch.”

I stared at him in awe, amazed that so much light and air could stream out of one man’s ego. I think in his mind, Kip really believed that spending all of his time on personal science projects put him on the side of the angels. That St. Peter himself would one day invite him through the Pearly Gates just to hang out and compare notes on fireflies. And then of course Kip would get to tell St. Peter, “no thanks,” and march right past him into a strip club.

My mental cataloging of Kip’s character flaws was broken up by an American White Pelican growling loudly somewhere in the sky above us. It spun around a few times before divebombing into the creek–an ambivalently graceful entrance reminiscent of Greg Louganis at the ’88 Olympics. American White Pelicans aren’t supposed to plunge-dive for food, but I guess that’s the beauty of America; anyone can exercise their individuality at any given moment. You can get Superman, floating above Metropolitan landmarks, eyeing anonymous bathrooms and wishing he could use his powers for evil instead of good; Cornelius Kipling, with ideas so grand and unattainable they crush out every practical instinct in his body; and me, with my theatrical vision of myself–starring myself, as Hari Seldon, the world’s first useful psychologist!

And all of us just here for a brief flash in the goldpan of time; just temporary sunbathers in America.

“You’re overthinking things again,” Kip said from somewhere outside my head. “I can tell. You’ve got that dumb look on your face that says you think you have a really deep thought on your face. Well, you don’t. You know what, forget the books; the nudists have the right idea. Go lie on the grass and pour some goddamn sunshine on your skin. You look even whiter than I remembered.”

we, the people, who make mistakes–economists included

Andrew Gelman discusses a “puzzle that’s been bugging [him] for a while”:

Pop economists (or, at least, pop micro-economists) are often making one of two arguments:

1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist.

2. People are irrational and they need economists, with their open minds, to show them how to be rational and efficient.

Argument 1 is associated with “why do they do that?” sorts of puzzles. Why do they charge so much for candy at the movie theater, why are airline ticket prices such a mess, why are people drug addicts, etc. The usual answer is that there’s some rational reason for what seems like silly or self-destructive behavior.

Argument 2 is associated with “we can do better” claims such as why we should fire 80% of public-school teachers or Moneyball-style stories about how some clever entrepreneur has made a zillion dollars by exploiting some inefficiency in the market.

The trick is knowing whether you’re gonna get 1 or 2 above. They’re complete opposites!

Personally what I find puzzling isn’t really how to reconcile these two strands (which do seem to somehow coexist quite peacefully in pop economists’ writings); it’s how anyone–economist or otherwise–still manages to believe people are rational in any meaningful sense (and I’m not saying Andrew does; in fact, see below).

There are at least two non-trivial ways to define rationality. One is in terms of an ideal agent’s actions–i.e., rationality is what a decision-maker would choose to do if she had unlimited cognitive resources and knew all the information relevant to a given decision. Well, okay, maybe not an ideal agent, but at the very least a very smart one. This is the sense of rationality in which you might colloquially remark to your neighbor that buying lottery tickets is an irrational thing to do, because the odds are stacked against you. The expected value of buying a lottery ticket (i.e., the amount you would expect to end up with in the long run) is generally negative, so in some normative sense, you could say it’s irrational to buy lottery tickets.
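To put toy numbers on that (the figures below are invented, but the shape of the problem holds for any real lottery):

```python
# Expected value of a hypothetical lottery ticket. All numbers are made up
# for illustration; real lotteries differ in detail but not in conclusion.
ticket_price = 2.00
jackpot = 1_000_000
p_win = 1 / 10_000_000

ev = p_win * jackpot - ticket_price  # average net winnings per ticket, long run
print(f"expected value per ticket: ${ev:.2f}")  # -> $-1.90
```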

This definition of rationality is probably quite close to the colloquial usage of the term, but it’s not really interesting from an academic standpoint, because nobody (economists included) really believes we’re rational in this sense. It’s blatantly obvious to everyone that none of us really make normatively correct choices much of the time. If for no other reason than that we’re all somewhat lacking in the omniscience department.

What economists mean when they talk about rationality is something more technical; specifically, it’s that people have stable, internally consistent preferences. That is, given any set of preferences an individual happens to have (which may seem completely crazy to everyone else), rationality implies that that person expresses those preferences in a consistent manner. If you like dark chocolate more than milk chocolate, and milk chocolate more than Skittles, you shouldn’t like Skittles more than dark chocolate. If you do, you’re violating the principle of transitivity, which would effectively make it impossible to model your preferences formally (since we’d have no way of telling what you’d prefer in any given situation). And that would be a problem for standard economic theory, which is based on the assumption that people are fundamentally rational agents (in this particular sense).
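Transitivity also has the virtue of being trivial to check mechanically. Here’s a toy checker, using the chocolate example above plus the offending third preference:

```python
from itertools import permutations

# Pairwise preferences: (a, b) means "a is preferred to b". The first two
# pairs are the example from the text; the third one creates the violation.
prefers = {
    ("dark chocolate", "milk chocolate"),
    ("milk chocolate", "skittles"),
    ("skittles", "dark chocolate"),  # <- intransitive!
}

def transitivity_violations(prefers):
    """Return every triple (a, b, c) where a > b and b > c but c > a."""
    items = {x for pair in prefers for x in pair}
    return [(a, b, c) for a, b, c in permutations(items, 3)
            if (a, b) in prefers and (b, c) in prefers and (c, a) in prefers]

print(transitivity_violations(prefers))
# The cycle gets reported three times, once from each starting point; with
# a transitive preference set, the list comes back empty.
```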

The reason I say it’s puzzling that anyone still believes people are rational in even this narrower sense is that decades of behavioral economics and psychology research have repeatedly demonstrated that people just don’t have consistent preferences. You can radically influence and alter decision-makers’ behavior in all sorts of ways that simply aren’t predicted or accounted for by Rational Choice Theory (RCT). I’ll give just two examples here, but there are any number of others, as many excellent books attest (e.g., Dan Ariely’s Predictably Irrational, or Thaler and Sunstein’s Nudge).

The first example stems from famous work by Madrian and Shea (2001) investigating the effects of savings plan designs on employees’ 401(k) choices. By pretty much anyone’s account, decisions about savings plans should be a pretty big deal for most employees. The difference between opting into a 401(k) and opting out of one can easily amount to several hundred thousand dollars over the course of a lifetime, so you would expect people to have a huge incentive to make the choice that’s most consistent with their personal preferences (whether those preferences happen to be for splurging now or saving for later). Yet what Madrian and Shea convincingly showed was that most employees simply go with the default plan option. When companies switch from opt-in to opt-out (i.e., instead of calling up HR and saying you want to join the plan, you’re enrolled by default, and have to fill out a form if you want to opt out), enrollment in the 401(k) jumps by nearly 50 percentage points.

This result (and any number of others along similar lines) makes no sense under rational choice theory, because it’s virtually impossible to conceive of a consistent set of preferences that would explain this type of behavior. Many of the same employees who won’t take ten minutes out of their day to opt in or out of their 401(k) will undoubtedly drive across town to save a few dollars on their groceries; like most people, they’ll look for bargains, buy cheaper goods rather than more expensive ones, worry about leaving something for their children after they’re gone, and so on and so forth. And one can’t simply attribute the discrepancy in behavior to ignorance (i.e., “no one reads the fine print!”), because the whole point of massive incentives is that they’re supposed to incentivize you to do things like look up information that could be relevant to, oh, say, having hundreds of thousands of extra dollars in your bank account in forty years. If you’re willing to look for coupons in the Sunday paper to save a few dollars, but aren’t willing to call up HR and ask about your savings plan, there is, to put it frankly, something mildly inconsistent about your preferences.

The other example stems from the enormous literature on risk aversion. The classic risk aversion finding is that most people require a higher nominal payoff on risky prospects than on safe ones before they’re willing to accept the risky prospect. For instance, most people would rather have $10 for sure than $50 with 25% probability, even though the expected value of the latter is 25% higher (an amazing return!). Risk aversion is a pervasive phenomenon, and crops up everywhere, including in financial investments, where it is known as the equity premium puzzle (the puzzle being that many investors prefer bonds to stocks even though the historical record suggests a massively higher rate of return for stocks over the long term).

From a naive standpoint, you might think the challenge risk aversion poses to rational choice theory is that risk aversion is just, you know, stupid. Meaning, if someone keeps offering you $10 with 100% probability or $50 with 25% probability, it’s stupid to keep making the former choice (which is what most people do when you ask them) when you’re going to make much more money in the long run by making the latter choice. But again, remember, economic rationality isn’t about preferences per se, it’s about consistency of preferences. Risk aversion may violate a simplistic theory under which people are supposed to simply maximize expected value at all times; but then, no one’s really believed that for several hundred years. The standard economist’s response to the observation that people are risk averse is to point out that people aren’t maximizing expected value, they’re maximizing utility. Utility is a non-linear function of monetary value, so people assign different weight to the (N+1)th dollar earned than to the Nth dollar earned. For instance, in the classical value function identified by Kahneman and Tversky in their seminal work (which, in part, later earned Kahneman the Nobel prize), the curve is concave for gains and convex, and steeper, for losses.

The idea here is that the average person overvalues small gains relative to larger gains; i.e., you may be more satisfied when you receive $200 than when you receive $100, but you’re not going to be twice as satisfied.
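To see how the curvature does the work, here’s a toy calculation pitting the $10-for-sure option against the $50-with-25%-probability gamble from above. The square root is just a convenient stand-in for the real Kahneman and Tversky function (which also involves loss aversion and probability weighting); any sufficiently concave curve makes the same point:

```python
import math

# Concave utility produces risk aversion: the gamble wins on expected
# value, but the sure thing wins on expected utility.
def utility(x):
    return math.sqrt(x)  # toy concave utility, not the actual K&T function

ev_sure, ev_gamble = 10.0, 0.25 * 50.0                    # 10.00 vs 12.50
eu_sure, eu_gamble = utility(10.0), 0.25 * utility(50.0)  # 3.16 vs 1.77

print(f"expected value:   sure thing {ev_sure:.2f}, gamble {ev_gamble:.2f}")
print(f"expected utility: sure thing {eu_sure:.2f}, gamble {eu_gamble:.2f}")
```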

This seemed like a sufficient response for a while, since it appeared to preserve consistency as the hallmark of rationality. The idea is that you can have people who have more or less curvature in their value and probability weighting functions (i.e., some people are more risk-averse than others), and that’s just fine as long as those preferences are consistent. Meaning, it’s okay if you prefer $50 with 25% probability to $10 with 100% probability just as long as you also prefer $50 with 25% probability to $8 with 100% probability, or to $7 with 100% probability, and so on. So long as your preferences are consistent, your behavior can be explained by RCT.

The problem, as many people have noted, is that in actuality there isn’t any set of consistent preferences that can explain most people’s risk-averse behavior. A succinct and influential summary of the problem was provided by Rabin (2000), who showed formally that the choices people make when dealing with small amounts of money imply such an absurd level of risk aversion that, to remain consistent, people would have to reject gambles offering infinitely large payoffs whenever those gambles carried even a modest potential loss. Put differently,

if a person always turns down a 50-50 lose $100/gain $110 gamble, she will always turn down a 50-50 lose $800/gain $2,090 gamble. … Somebody who always turns down 50-50 lose $100/gain $125 gambles will turn down any gamble with a 50% chance of losing $600.

The reason for this is simply that any concave utility function that passes through the choices expressed by the low-magnitude prospects (e.g., a refusal to take a 50-50 bet with lose $100/gain $110 outcomes) has to flatten out almost immediately. So for people to have internally consistent preferences, they would literally have to be turning down infinite but uncertain payoffs for certain but modest ones. Which of course is absurd; in practice, you would have a hard time finding many people who would refuse a coin toss where they lose $600 on heads and win $$$infinity dollarz$$$ on tails. Though you might have a very difficult time convincing them you’re serious about the bet. And an even more difficult time finding infinity trucks with which to haul in those infinity dollarz in the event you lose.
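You can get the flavor of Rabin’s calibration argument in a few lines of code. His theorem assumes nothing beyond concavity; the exponential (CARA) utility below is just a convenient special case I’ve chosen because it makes accept/reject decisions independent of current wealth:

```python
import math

# CARA utility: u(x) = -exp(-a*x). An agent accepts a 50-50 lose-L/gain-G
# gamble iff 0.5*u(-L) + 0.5*u(G) > u(0), i.e. iff exp(a*L) + exp(-a*G) < 2.
def rejects(a, loss, gain):
    return math.exp(a * loss) + math.exp(-a * gain) > 2

# Calibrate the curvature a so the agent just barely turns down the
# 50-50 lose $100 / gain $110 gamble (simple bisection).
lo, hi = 1e-9, 0.01
for _ in range(100):
    mid = (lo + hi) / 2
    if rejects(mid, 100, 110):
        hi = mid
    else:
        lo = mid
a = hi
print(f"calibrated curvature: a = {a:.6f}")  # roughly 0.0009

# With that curvature, exp(800*a) alone already exceeds 2, so the agent
# turns down a 50-50 lose-$800 gamble no matter how large the upside.
print(math.exp(800 * a) > 2)    # True
print(rejects(a, 800, 10**12))  # True: rejects a 50% shot at a trillion dollars
```

Which is exactly the absurdity described above: an agent calibrated to refuse the small gamble must also refuse a coin flip for a trillion dollars.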

Anyway, these are just two prominent examples; there are literally hundreds of other similar examples in the behavioral economics literature of supposedly rational people displaying wildly inconsistent behavior. And not just a minority of people; it’s pretty much all of us. Presumably including economists. Irrationality, as it turns out, is the norm and not the exception. In some ways, what’s surprising is not that we’re inconsistent, but that we manage to do so well despite our many biases and failings.

To return to the puzzle Andrew Gelman posed, though, I suspect Andrew’s being facetious, and doesn’t really see this as much of a puzzle at all. Here’s his solution:

The key, I believe, is that “rationality” is a good thing. We all like to associate with good things, right? Argument 1 has a populist feel (people are rational!) and argument 2 has an elitist feel (economists are special!). But both are ways of associating oneself with rationality. It’s almost like the important thing is to be in the same room with rationality; it hardly matters whether you yourself are the exemplar of rationality, or whether you’re celebrating the rationality of others.

This seems like a somewhat more tactful way of saying what I suspect Andrew and many other people (and probably most academic psychologists, myself included) already believe, which is that there isn’t really any reason to think that people are rational in the sense demanded by RCT. That’s not to say economics is bunk, or that it doesn’t make sense to think about incentives as a means of altering behavior. Obviously, in a great many situations, pretending that people are rational is a reasonable approximation to the truth. For instance, in general, if you offer more money to have a job done, more people will be willing to do that job. But the fact that the tenets of standard economics often work shouldn’t blind us to the fact that they also often don’t, and that they fail in many systematic and predictable ways. For instance, sometimes paying people more money makes them perform worse, not better. And sometimes it saps them of the motivation to work at all. Faced with overwhelming empirical evidence that people don’t behave as the theory predicts, the appropriate response should be to revisit the theory, or at least to recognize which situations it should be applied in and which it shouldn’t.

Anyway, that’s a long-winded way of saying I don’t think Andrew’s puzzle is really a puzzle. Economists simply aren’t consistent in their own views about consistency, and that’s not surprising, because neither is anyone else. That doesn’t make them (or us) bad people; it just makes us all people.

amusing evidence of a lazy cut and paste job

In the course of a literature search, I came across the following abstract, from a 1990 paper titled “Taking People at Face Value: Evidence for the Kernel of Truth Hypothesis”, and taken directly from the publisher’s website:

Two studies examined the validity of impressions based on static facial appearance. In Study 1, the content of previously unacquainted classmates’ impressions of one another was assessed during the 1st, 5th, and 9th weeks of the semester. These impressions were compared with ratings of facial photographs of the participants that were provided by a separate group of unacquainted judges. Impressions based on facial appearance alone predicted impressions provided by classmates after up to 9 weeks of acquaintance. Study 2 revealed correspondences between self ratings provided by stimulus persons, and ratings of their faces provided by unacquainted judges. Mechanisms by which these links may develop are discussed.

Now fully revealed by the fire and candlelight, I was amazed more than ever to behold the transformation of Heathcliff. His countenance was much older in expression and decision of feature than Mr. Linton’s; it looked intelligent and retained no marks of former degradation. A half civilized ferocity lurked yet in the depressed brows and eyes full of black fire, but it was subdued.

 

Apparently social psychology was a much more interesting place in 1990.

Some more investigation revealed the source of the problem. Here’s the first page of the PDF:

[scanned image of the first page of the PDF]

So it looks to be a lazy cut and paste job on the publisher’s part rather than a looking glass into the creative world of scientific writing in the early 1990s. Which I guess is for the best, otherwise Diane S. Berry would be on the hook for plagiarizing from Wuthering Heights. And not in a subtle way either.