in praise of (lab) rotation

I did my PhD in psychology, but in a department that had close ties and collaborations with neuroscience. One of the interesting things about psychology and neuroscience programs is that they seem to have quite different graduate training models, even in cases where the area of research substantively overlaps (e.g., in cognitive neuroscience). In psychology, there seem to be two general models (at least, at American and Canadian universities; I’m not really familiar with other systems). One is that graduate students are accepted into a specific lab and have ties to a specific advisor (or advisors); the other, more common at large state schools, is that graduate students are accepted into the program (or an area within the program) as a whole, and are then given the (relative) freedom to find an advisor they want to work with. There are pros and cons to either model: the former ensures that every student has a place in someone’s lab from the very beginning of training, so that no one falls through the cracks; but the downside is that beginning students often aren’t sure exactly what they want to work on, and there are occasional (and sometimes acrimonious) mentor-mentee divorces. The latter gives students more freedom to explore their research interests, but can make it more difficult for students to secure funding, and has more of a sink-or-swim flavor (i.e., there’s less institutional support for students).

Both of these models differ quite a bit from what I take to be the most common neuroscience model, which is that students spend all or part of their first year doing a series of rotations through various labs–usually for about 2 months at a time. The idea is to expose students to a variety of different lines of research so that they get a better sense of what people in different areas are doing, and can make a more informed judgment about what research they’d like to pursue. And there are obviously other benefits too: faculty get to evaluate students on a trial basis before making a long-term commitment, and conversely, students get to see the internal workings of the lab and have more contact with the lab head before signing on.

I’ve always thought the rotation model makes a lot of sense, and wonder why more psychology programs don’t try to implement one. I can’t complain about my own training, in that I had a really great experience on both personal and professional levels in the labs I worked in; but I recognize that this was almost entirely due to dumb luck. I didn’t really do my homework very well before entering graduate school, and I could easily have landed in a department or lab I didn’t mesh well with, and spent the next few years miserable and unproductive. I’ll freely admit that I was unusually clueless going into grad school (that’s a post for another time), but I think no matter how much research you do, there’s just no way to know for sure how well you’ll do in a particular lab until you’ve spent some time in it. And most first-year graduate students have kind of fickle interests anyway; it’s hard to know when you’re 22 or 23 exactly what problem you want to spend the rest of your life (or at least the next 4 – 7 years) working on. Having people do rotations in multiple labs seems like an ideal way to maximize the odds of students (and faculty) ending up in happy, productive working relationships.

A question, then, for people who’ve had experience on the administrative side of psychology (or neuroscience) departments: what keeps us from applying a rotation model in psychology too? Are there major disadvantages I’m missing? Is the problem one of financial support? Do we think that psychology students come into graduate programs with more focused interests? Or is it just a matter of convention? Inquiring minds (or at least one of them) want to know…

what’s adaptive about depression?

Jonah Lehrer has an interesting article in the NYT magazine about a recent Psych Review article by Paul Andrews and J. Anderson Thomson. The basic claim Andrews and Thomson make in their paper is that depression is “an adaptation that evolved as a response to complex problems and whose function is to minimize disruption of rumination and sustain analysis of complex problems”. Lehrer’s article is, as always, engaging, and he goes out of his way to obtain some critical perspectives from other researchers not affiliated with Andrews & Thomson’s work. It’s definitely worth a read.

In reading Lehrer’s article and the original paper, two things struck me. One is that I think Lehrer slightly exaggerates the novelty of Andrews and Thomson’s contribution. The novel suggestion of their paper isn’t that depression can be adaptive under the right circumstances (I think most people already believe that, and as Lehrer notes, the idea traces back a long way); it’s that the specific adaptive purpose of depression is to facilitate solving of complex problems. I think Andrews and Thomson’s paper received a somewhat critical reception (which Lehrer discusses) not so much because people found the suggestion that depression might be adaptive objectionable, but because there are arguably more plausible things depression could have been selected for. Lehrer mentions a few:

Other scientists, including Randolph Nesse at the University of Michigan, say that complex psychiatric disorders like depression rarely have simple evolutionary explanations. In fact, the analytic-rumination hypothesis is merely the latest attempt to explain the prevalence of depression. There is, for example, the “plea for help” theory, which suggests that depression is a way of eliciting assistance from loved ones. There’s also the “signal of defeat” hypothesis, which argues that feelings of despair after a loss in social status help prevent unnecessary attacks; we’re too busy sulking to fight back. And then there’s “depressive realism”: several studies have found that people with depression have a more accurate view of reality and are better at predicting future outcomes. While each of these speculations has scientific support, none are sufficient to explain an illness that afflicts so many people. The moral, Nesse says, is that sadness, like happiness, has many functions.

Personally, I find these other suggestions more plausible than the Andrews and Thomson story (if still not terribly compelling). There are a variety of reasons for this (see Jerry Coyne’s twin posts for some of them, along with the many excellent comments), but one pretty big one is that they’re all at least somewhat more consistent with a continuity hypothesis under which many of the selection pressures that influenced the structure of the human mind have been at work in our lineage for millions of years. That’s to say, if you believe in a “signal of defeat” account, you don’t have to come up with complex explanations for why human depression is adaptive (the problem being that other mammals don’t seem to show an affinity for ruminating over complex analytical problems); you can just attribute depression to much more general selection pressures found in other animals as well.

One hypothesis I particularly like in this respect, related to the signal-of-defeat account, is that depression is essentially just a human manifestation of a general tendency toward low self-confidence and low aggression. The value of low self-confidence is pretty obvious: you don’t challenge the alpha male, so you don’t get into fights; you only chase prey you think you can safely catch; and so on. Now suppose humans inherited this basic architecture from our ancestral apes. In human societies there’s still a clear potential benefit to being subservient and non-confrontational; it’s a low-risk, low-reward strategy. If you don’t bother anyone, you’re probably not going to impress the opposite sex very much, but at least you won’t get clubbed over the head by a competitor very often. So there’s a sensible argument to be made for frequency-dependent selection for depression-related traits (the reason it’s likely to be frequency-dependent is that if you ever had a population made up entirely of self-doubting, non-aggressive individuals, being more aggressive would probably become highly advantageous, so at some point, you’d achieve a stable equilibrium).
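
This is pure armchair theorizing, of course, but the frequency-dependence part is easy to see in a toy model. Here’s a minimal replicator-dynamics sketch (in Python, with completely made-up Hawk-Dove-style payoffs of my own choosing) showing how a population that starts out almost entirely timid settles into a stable mix of aggressive and non-aggressive strategies rather than drifting to either extreme:

```python
# A toy replicator-dynamics model of frequency-dependent selection, using
# Hawk-Dove-style payoffs. All payoff values are illustrative assumptions,
# not estimates of anything about real primates.

V, C = 2.0, 3.0  # assumed value of a contested resource and cost of a fight (C > V)

def payoffs(p_aggressive):
    """Expected payoff to each strategy, given the fraction of aggressive individuals."""
    p, q = p_aggressive, 1 - p_aggressive
    aggressive = p * (V - C) / 2 + q * V  # fight other aggressives, take everything from the timid
    timid = p * 0.0 + q * V / 2           # defer to aggressives, share with other timids
    return aggressive, timid

p = 0.01  # start with a population that is almost entirely timid/non-confrontational
for generation in range(200):
    w_a, w_t = payoffs(p)
    mean_fitness = p * w_a + (1 - p) * w_t
    p = p * w_a / mean_fitness  # strategies grow in proportion to their relative fitness

print(f"Equilibrium fraction of aggressive individuals: {p:.2f} (analytic value V/C = {V / C:.2f})")
```

The exact equilibrium depends entirely on the assumed payoffs; the only point is that neither pure strategy takes over, which is all the frequency-dependent story needs.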

So where does rumination–the main focus of the Andrews and Thomson paper–come into the picture? Well, I don’t know for sure, but here’s a pretty plausible just-so story: once you evolve the capacity to reason intelligently about yourself, you now have a higher cognitive system that’s naturally going to want to understand why it feels the way it does so often. If you’re someone who feels pretty upset about things much of the time, you’re going to think about those things a lot. So… you ruminate. And that’s really all you need! Saying that depression is adaptive doesn’t require you to think of every aspect of depression (e.g., rumination) as a complex and human-specific adaptation; it seems more parsimonious to see depressive rumination as a non-adaptive by-product of a more general and (potentially) adaptive disposition to experience negative affect.  On this type of account, ruminating isn’t actually helping a depressed person solve any problems at all. In fact, you could even argue that rumination shouldn’t make you feel better, or it would defeat the very purpose of having a depressive nature in the first place. In other words, it’s entirely consistent with the basic argument that depression is adaptive under some circumstances that the very purpose of rumination might be to keep depressed people in a depressed state. I don’t have any direct evidence for this, of course; it’s a just-so story. But it’s one that is, in my opinion (a) more plausible and (b) more consistent with indirect evidence (e.g., that rumination generally doesn’t seem to make people feel better!) than the Andrews and Thomson view.

The other thing that struck me about the Andrews and Thomson paper, and to a lesser extent, Lehrer’s article, is that the focus is (intentionally) squarely on whether and why depression is adaptive from an evolutionary standpoint. But it’s not clear that the average person suffering from depression really cares, or should care, about whether their depression exists for some distant evolutionary reason. What’s much more germane to someone suffering from depression is whether their depression is actually increasing their quality of life, and in that respect, it’s pretty difficult to make a positive case. The argument that rumination is adaptive because it helps you solve complex analytical problems is only compelling if you think that those problems are really worth mulling over deeply in the first place. For most of the things that depressed people tend to ruminate over (most of which aren’t life-changing decisions, but trivial things like whether your co-workers hate you because of the unfashionable shirt you wore to work yesterday), that just doesn’t seem to be the case. So the argument becomes circular: rumination helps you solve problems that a happier person probably wouldn’t have been bothered by in the first place. Now, that isn’t to say that there aren’t some very specific environments in which depression might still be adaptive today; it’s just that there don’t seem to be very many of them. If you look at the data, it’s quite clear that, on average, depression has very negative effects. People lose friends, jobs, and the joy of life because of their depression; it’s hard to see what monumental problem-solving insight could possibly compensate for that in most cases. By way of analogy, saying that depression is adaptive because it promotes rumination seems kind of like saying that cigarettes serve an adaptive purpose because they make nicotine withdrawal go away. Well, maybe. But wouldn’t you rather not have the withdrawal symptoms to begin with?

To be clear, I’m not suggesting that we should view depression solely in pathological terms, and should entirely write off the possibility that there are some potentially adaptive aspects to depression (or personality traits that go along with it). Rather, the point is that, if you’re suffering from depression, it’s not clear what good it’ll do you to learn that some of your ancestors may have benefited from their depressive natures. (By the same token, you wouldn’t expect a person suffering from sickle-cell anemia to gain much comfort from learning that they carry two copies of a mutation that, in a heterozygous carrier, would confer a strong resistance to malaria.) Conversely, there’s a very real danger here, in the sense that, if Andrews and Thomson are wrong about rumination being adaptive, they might be telling people it’s OK to ruminate when in fact excessive rumination could be encouraging further depression. My sense is that that’s actually the received wisdom right now (i.e., much of cognitive-behavioral therapy is focused on getting depressed individuals to recognize their ruminative cycles and break out of them). So the concern is that too much publicity might be a bad thing in this case, and, far from heralding the arrival of a new perspective on the conceptualization and treatment of depression, may actually be hurting some people. Ultimately, of course, it’s an empirical matter, and certainly not one I have any conclusive answers to. But what I can quite confidently assert in the meantime is that the Lehrer article is an enjoyable read, so long as you read it with a healthy dose of skepticism.

ResearchBlogging.org
Andrews, P., & Thomson, J. (2009). The bright side of being blue: Depression as an adaptation for analyzing complex problems. Psychological Review, 116(3), 620-654. DOI: 10.1037/a0016242

if natural selection goes, so does most everything else

Jerry Fodor and Massimo Piattelli-Palmarini have a new book out entitled What Darwin Got Wrong. The book has–to put it gently–not been very well received (well, the creationists love it). Its central thesis is that natural selection fails as a mechanism for explaining observable differences between species, because there’s ultimately no way to conclusively determine whether a given trait was actively selected for, or if it’s just a free-rider that happened to be correlated with another trait that truly was selected for. For example, we can’t really know why polar bears are white: it could be that natural selection favored white fur because it allows the bears to blend into their surroundings better (presumably improving their hunting success), or it could be that bears with sharper teeth happen to have white fur, or that smaller, less energetic bears who need to eat less often tend to have white fur, or that a mutant population of polar bears who happened to be white also happened to have a resistance to some deadly disease that wiped out all non-white polar bears, or… you get the idea.

If this sounds like pretty silly reasoning to you, you’re not alone. Virtually all of the reviews (or at least, those written by actual scientists) have resoundingly panned Fodor and Piattelli-Palmarini for writing a book about evolution with very little apparent understanding of evolution. Since I haven’t read the book, and can’t claim much knowledge of evolutionary biology, I’m not going to weigh in with a substantive opinion, except to say that, based on the reviews I’ve read, along with an older article of Fodor’s that makes much the same argument, I don’t see any reason to disagree with the critics. The most elegant critique I’ve come across is Block and Kitcher’s review of the book in the Boston Review:

The basic problem, according to Fodor and Piattelli-Palmarini, is that the distinction between free-riders and what they ride on is “invisible to natural selection.” Thus stated, their objection is obscure because it relies on an unfortunate metaphor, introduced by Darwin. In explaining natural selection, the Origin frequently resorts to personification: “natural selection is daily and hourly scrutinising, throughout the world, every variation, even the slightest” (emphasis added). When they talk of distinctions that are “invisible” to selection, they continue this personification, treating selection as if it were an observer able to choose among finely graded possibilities. Central to their case is the thesis that Darwinian evolutionary theory must suppose that natural selection can make the same finely graded discriminations available to a human (or divine?) observer.

Neither Darwin, nor any of his successors, believes in the literal scrutiny of variations. Natural selection, soberly presented, is about differential success in leaving descendants. If a variant trait (say, a long neck or reduced forelimbs) causes its bearer to have a greater number of offspring, and if the variant is heritable, then the proportion of organisms with the variant trait will increase in subsequent generations. To say that there is “selection for” a trait is thus to make a causal claim: having the trait causes greater reproductive success.

Causal claims are of course familiar in all sorts of fields. Doctors discover that obesity causes increased risk of cardiac disease; atmospheric scientists find out that various types of pollutants cause higher rates of global warming; political scientists argue that party identification is an important cause of voting behavior. In each of these fields, the causes have correlates: that is why causation is so hard to pin down. If Fodor and Piattelli-Palmarini believe that this sort of causal talk is “conceptually flawed” or “incoherent,” then they have a much larger opponent than Darwinism: their critique will sweep away much empirical inquiry.

This really seems to me to get at the essence of the claim, and why it’s silly. Fodor and Piattelli-Palmarini are essentially claiming that natural selection is bunk because you can never be absolutely sure that natural selection operated on the trait you think it operated on. But scientists don’t require absolute certainty to hold certain beliefs about the way the world works; we just require that those beliefs seem somewhat more plausible than other available alternatives. If you take absolute certainty as a necessary criterion for causal inference, you can’t do any kind of science, period.

It’s not just evolutionary biology that suffers; if you held psychologists to the same standards, for example, we’d be in just as much trouble, because there’s always some potential confound that might explain away a putative relation between an experimental manipulation and a behavioral difference. If nothing else, you can always blame sampling error: you might think that giving your subjects 200 mg of caffeine was what caused them to report decreased levels of subjective fatigue, but maybe you just happened to pick a particularly sleep-deprived control group. That’s surely no less plausible an explanation than some of the alternative accounts for the whiteness of the polar bear suggested above. But if you take this type of argument seriously, you can pretty much throw any type of causal inference (and hence, most science) out the window. So it’s hardly surprising that Fodor and Piattelli-Palmarini’s new book hasn’t received a particularly warm reception. Most of the critics are under the impression that science is a pretty valuable enterprise, and seems to work reasonably well most of the time, despite the rampant uncertainty that surrounds most causal inferences.
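
To put a rough number on the sampling-error worry, here’s a quick simulation (with entirely invented numbers, and no actual caffeine involved) showing how often two groups drawn from exactly the same population end up differing “significantly” at p < .05:

```python
# Two groups drawn from the *same* population still differ "significantly"
# about 5% of the time at p < .05. All numbers here are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_per_group = 10_000, 20
false_positives = 0
for _ in range(n_experiments):
    caffeine = rng.normal(loc=50, scale=10, size=n_per_group)  # "caffeine" group: no true effect
    control = rng.normal(loc=50, scale=10, size=n_per_group)   # control group: same distribution
    _, p = stats.ttest_ind(caffeine, control)
    false_positives += p < .05

print(f"Nominally significant differences with no real effect: {false_positives / n_experiments:.1%}")
```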

Lest you think there must be some subtlety to Fodor’s argument the critics have missed, or that there’s some knee-jerk defensiveness going on on the part of, well, damned near every biologist who’s cared to comment, I leave you with this gem, from a Salon interview with Fodor (via Jerry Coyne):

Creationism isn’t the only doctrine that’s heavily into post-hoc explanation. Darwinism is too. If a creature develops the capacity to spin a web, you could tell a story of why spinning a web was good in the context of evolution. That is why you should be as suspicious of Darwinism as of creationism. They have spurious consequence in common. And that should be enough to make you worry about either account.

I guess if you really believed that every story you could come up with about web-spinning was just as good as any other, and that there was no way to discriminate between them empirically (a notion Coyne debunks), this might seem reasonable. But then, you can always make up just-so stories to fit any set of facts. If you don’t allow for the fact that some stories have better evidential support than others, you indeed have no way to discriminate creationism from science. But I think it’s a sad day if Jerry Fodor–who’s made several seminal contributions to cognitive science and the philosophy of science–really believes that.

what do personality psychology and social psychology actually have in common?

Is there a valid (i.e., non-historical) reason why personality psychology and social psychology are so often lumped together as one branch of psychology? There are PSP journals, PSP conferences, PSP brownbags… the list goes on. It all seems kind of odd considering that, in some ways, personality psychologists and social psychologists have completely opposite focuses (foci?). Personality psychologists are all about the consistencies in people’s behavior, and classify situational variables under “measurement error”; social psychologists care not one whit for traits, and are all about how behavior is influenced by the situation. Also, aside from the conceptual tension, I’ve often gotten the sense that personality psychologists and social psychologists just don’t like each other very much. Which I guess would make sense if you think these are two relatively distinct branches of psychology that, for whatever reason, have been lumped together inextricably for several decades. It’s kind of like being randomly assigned a roommate in college, except that you have to live with that roommate for the rest of your life.

I’m not saying there aren’t ways in which the two disciplines overlap. There are plenty of similarities; for example, they both tend to heavily feature self-report, and both often involve the study of social behavior. But that’s not really a good enough reason to lump them together. You can take almost any two branches of psychology and find a healthy intersection. For example, the interface between social psychology and cognitive psychology is one of the hottest areas of research in psychology at the moment. There’s a journal called Social Cognition–which, not coincidentally, is published by the International Social Cognition Network. Lots of people are interested in applying cognitive psychology models to social psychological issues. But you’d probably be taking bullets from both sides of the hallway if you ever suggested that your department should combine their social psychology and cognitive psychology brown bag series. Sure, there’s an overlap, but there’s also far more content that’s unique to each discipline.

The same is true for personality psychology and social psychology, I’d argue. Many (most?) personality psychologists aren’t intrinsically interested in social aspects of personality (at least, no more so than in other, non-social aspects), and many social psychologists couldn’t give a rat’s ass about the individual differences that make each of us a unique and special flower. And yet there we sit, week after week, all together in the same seminar room, as one half of the audience experiences rapture at the speaker’s words, and the other half wishes they could be slicing blades of grass off their lawn with dental floss. What gives?

the OKCupid guide to dating older women

Continuing along on their guided tour of Data I Wish I Had Access To, the OKCupid folks have posted another set of interesting figures on their blog. This time, they make the case for dating older women, suggesting that men might get more bang for their buck (in a literal sense, I suppose) by trying to contact women their age or older, rather than trying to hit on the young ‘uns. Men, it turns out, are creepy. Here’s how creepy:

Actually, that’s not so creepy. All it says is that men say they prefer to date younger women. That’s not going to shock anyone. This one is creepier:

The reason it’s creepy is that it basically says that, irrespective of what age ranges men say they find acceptable in a potential match, they’re actually all indiscriminately messaging 18-year-old women. So basically, if you’re a woman on OKCupid who’s searching for that one special, non-creepy guy, be warned: they don’t exist. They’re pretty much all going to be eyeing 18-year-olds for the rest of their lives. (To be fair, women also show a tendency to contact men below their lowest reported acceptable age. But it’s a much weaker effect; 40-year-old women only occasionally try to hit on 24-year-old guys, and tend to stay the hell away from the not-yet-of-drinking-age male population.)

Anyway, using this type of data, the OKCupid folks then generate this figure:

…which also will probably surprise no one, as it basically says women are most desirable when they’re young, and men when they’re (somewhat) older. But what the OKCupid folks then suggest is that it would be to men’s great advantage to broaden their horizons, because older women (which, in their range-restricted population, basically means anything over 30) self-report being much more interested in having sex more often, having casual sex, and using protection. I won’t bother hotlinking to all of those images, but here’s where they’re ultimately going with this:

I’m not going to comment on the appropriateness of trying to nudge one’s male userbase in the direction of more readily available casual sex (though I suspect they don’t need much nudging anyway). What I do wonder is to what extent these results reflect selection effects rather than a genuine age difference. The OKCupid folks suggest that women’s sexual interest increases as they age, which seems plausible given the conventional wisdom that women peak sexually in their 30s. But the effects in this case look pretty huge (unless the color scheme is misleading, which it might be; you’ll have to check out the post for the neat interactive flash animations), and it seems pretty plausible that much of the age effect could be driven by selection bias. Women with a more monogamous orientation are probably much more likely to be in committed, stable relationships by the time they turn 30 or 35, and probably aren’t scanning OKCupid for potential mates. Women who are in their 30s and 40s and still using online dating services are probably those who weren’t as interested in monogamous relationships to begin with. (Of course, the same is probably true of older men. Except that since men of all ages appear to be pretty interested in casual sex, there’s unlikely to be an obvious age differential.)

The other thing I’m not clear on is whether these analyses control for the fact that the userbase is heavily skewed toward younger users:

The people behind OKCupid are all mathematicians by training, so I’d be surprised if they hadn’t taken the underlying age distribution into consideration. But they don’t say anything about it in their post. The worry is that, if the base rate of different age groups isn’t taken into consideration, the heat map displayed above could be quite misleading. Given that there are many, many more 25-year-old women on OKCupid than 35-year-old women, failing to normalize properly would almost invariably make it look like there’s a heavy skew for men to message relatively younger women, irrespective of the male sender’s age. By the same token, it’s not clear that it’d be good advice to tell men to seek out older women, given that there are many fewer older women in the pool to begin with. As a thought experiment, suppose that the entire OKCupid male population suddenly started messaging women 5 years older than them, and entirely ignored their usual younger targets. The hit rate wouldn’t go up; it would probably actually fall precipitously, since there wouldn’t be enough older women to keep all the younger men entertained (at least, I certainly hope there wouldn’t). No doubt there’s a stable equilibrium point somewhere, where men and women are each targeting exactly the right age range to maximize their respective chances. I’m just not sure that it’s in OKCupid’s proposed “zone of greatness” for the men.
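
For what it’s worth, here’s a crude sketch of the kind of normalization I have in mind. The numbers are entirely made up (I obviously don’t have OKCupid’s data); the point is just that dividing raw message counts by the number of women of each age actually in the pool can make an apparent preference for younger women shrink considerably:

```python
# Toy illustration of the base-rate worry: raw message counts vs. counts
# normalized by how many women of each age are actually in the pool.
# All numbers are invented for the sake of the example.
import numpy as np

receiver_ages = np.array([20, 25, 30, 35, 40])
women_in_pool = np.array([5000, 4000, 2000, 800, 400])         # heavily skewed toward younger users
messages_from_30yo_men = np.array([900, 1000, 700, 250, 120])  # raw counts received, by age

raw_share = messages_from_30yo_men / messages_from_30yo_men.sum()
per_capita = messages_from_30yo_men / women_in_pool
per_capita_share = per_capita / per_capita.sum()

for age, raw, norm in zip(receiver_ages, raw_share, per_capita_share):
    print(f"age {age}: {raw:.0%} of raw messages, {norm:.0%} after adjusting for pool size")
```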

It’s also a bit surprising that OKCupid didn’t break down the response rate to people of the opposite gender as a function of the sender and receiver’s age. They’ve done this in the past, and it seems like the most direct way of testing whether men are more likely to get lucky by messaging older or younger women. Without knowing whether older women are actually responding to younger men’s overtures, it’s kind of hard to say what it all means. Except that I’d still kill to have their data.

Feynman’s first principle: on the virtue of changing one’s mind

As an undergraduate, I majored in philosophy. Actually, that’s not technically true: I came within one credit of double-majoring in philosophy and psychology, but I just couldn’t bring myself to take one more ancient philosophy course (a requirement for the major), so I ended up majoring in psychology and minoring in philosophy. But I still had to read a lot of philosophy, and one of my favorite works was Hilary Putnam’s Representation and Reality. The reason I liked it so much had nothing to do with the content (which, frankly, I remember nothing of), and everything to do with the introduction. Hilary Putnam was notorious for changing his mind about his ideas, a practice he defended this way in the introduction to Representation and Reality:

In this book I shall be arguing that the computer analogy, call it the “computational view of the mind,” or “functionalism,” or what you will, does not after all answer the question we philosophers (along with many cognitive scientists) want to answer, the question “What is the nature of mental states?” I am thus, as I have done on more than one occasion, criticizing a view I myself earlier advanced. Strangely enough, there are philosophers who criticize me for doing this. The fact that I change my mind in philosophy has been viewed as a character defect. When I am lighthearted, I retort that it might be that I change my mind because I make mistakes, and that other philosophers don’t change their minds because they simply never make mistakes.

It’s a poignant way of pointing out the absurdity of a view that seemed to me at the time much too common in philosophy (and which, I’ve since discovered, is also fairly common in science): that changing your mind is a bad thing, and conversely, that maintaining a consistent position on important issues is a virtue. I’ve never really understood this, since any time you have two people with incompatible views in the same room, at least one of them must be wrong–which means that a view picked at random has at least a 50% chance of being false. In science, of course, there are rarely just two explanations for a given phenomenon. Ask 10 cognitive neuroscientists what they think the anterior cingulate cortex does, and you’ll probably get a bunch of different answers (though maybe not 10 of them). So the odds of any one person being right about anything at any given point in time are actually not so good. If you’re honest with yourself about that, you’re forced to conclude not only that most published research findings are false, but also that the vast majority of theories that purport to account for large bodies of evidence are false–or at least, wrong in some important ways.

The fact that we’re usually wrong when we make scientific (or philosophical) pronouncements isn’t a reason to abandon hope and give up doing science, of course; there are shades of accuracy, and even if it’s not realistic to expect to be right much of the time, we can at least strive to be progressively less wrong. The best expression of this sentiment that I know of is an Isaac Asimov essay entitled The Relativity of Wrong. Asimov was replying to a letter from a reader who took offense to the fact that Asimov, in one of his other essays, “had expressed a certain gladness at living in a century in which we finally got the basis of the universe straight”:

The young specialist in English Lit, having quoted me, went on to lecture me severely on the fact that in every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern “knowledge” is that it is wrong. The young man then quoted with approval what Socrates had said on learning that the Delphic oracle had proclaimed him the wisest man in Greece. “If I am the wisest man,” said Socrates, “it is because I alone know that I know nothing.” The implication was that I was very foolish because I was under the impression I knew a great deal.

My answer to him was, “John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

The point being that scientific progress isn’t predicated on getting it right, but on getting it more right. Which seems reassuringly easy, except that that still requires us to change our minds about the things we believe in on occasion, and that’s not always a trivial endeavor.

In the years since reading Putnam’s introduction, I’ve come across a number of other related sentiments. One comes  from Richard Dawkins, in a fantastic 1996 Edge talk:

A formative influence on my undergraduate self was the response of a respected elder statesmen of the Oxford Zoology Department when an American visitor had just publicly disproved his favourite theory. The old man strode to the front of the lecture hall, shook the American warmly by the hand and declared in ringing, emotional tones: “My dear fellow, I wish to thank you. I have been wrong these fifteen years.” And we clapped our hands red. Can you imagine a Government Minister being cheered in the House of Commons for a similar admission? “Resign, Resign” is a much more likely response!

Maybe I’m too cynical, but I have a hard time imagining such a thing happening at any talk I’ve ever attended. But I’d like to believe that if it did, I’d also be clapping myself red.

My favorite piece on this theme, though, is without a doubt Richard Feynman’s “Cargo Cult Science” 1974 commencement address at Caltech. If you’ve never read it, you really should; it’s a phenomenally insightful and simultaneously entertaining assessment of the scientific process:

We’ve learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature’s phenomena will agree or they’ll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work. And it’s this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science.

A little further along, Feynman is even more succinct, offering what I’d say might be the most valuable piece of scientific advice I’ve come across:

The first principle is that you must not fool yourself–and you are the easiest person to fool.

I really think this is the first principle, in that it’s the one I apply most often when analyzing data and writing up papers for publication. Am I fooling myself? Do I really believe the finding, irrespective of how many zeros the p value happens to contain? Or are there other reasons I want to believe the result (e.g., that it tells a sexy story that might make it into a high-impact journal) that might trump its scientific merit if I’m not careful? Decision rules abound in science–the most famous one in psychology being the magical p < .05 threshold. But it’s very easy to fool yourself into believing things you shouldn’t believe when you allow yourself to off-load your scientific conscience onto some numbers in a spreadsheet. And the more you fool yourself about something, the harder it becomes to change your mind later on when you come across some evidence that contradicts the story you’ve sold yourself (and other people).

Given how I feel about mind-changing, I suppose I should really be able to point to cases where I’ve changed my own mind about important things. But the truth is that I can’t think of as many as I’d like. Which is to say, I worry that the fact that I still believe so many of the things I believed 5 or 10 years ago means I must be wrong about most of them. I’d actually feel more comfortable if I changed my mind more often, because then at least I’d feel more confident that I was capable of evaluating the evidence objectively and changing my beliefs when change was warranted. Still, there are at least a few ideas I’ve changed my mind about, some of them fairly big ones. Here are a few examples of things I used to believe and don’t any more, for scientific reasons:

  • That libertarianism is a reasonable ideology. I used to really believe that people would be happiest if we all just butted out of each other’s business and gave each other maximal freedom to govern our lives however we see fit. I don’t believe that any more, because the empirical evidence has convinced me that libertarianism just doesn’t (and can’t) work in practice, and is a worldview that doesn’t really have any basis in reality. When we’re given more information and more freedom to make our choices, we generally don’t make better decisions that make us happier; in fact, we often make poorer decisions that make us less happy. In general, human beings turn out to be really outstandingly bad at predicting the things that really make us happy–or even evaluating how happy the things we currently have make us. And the notion of personal responsibility that libertarians stress turns out to have very limited applicability in practice, because so many of the choices we make aren’t under our direct control in any meaningful sense (e.g., because the bulk of variance in our cognitive abilities and personalities is inherited from our parents, or because subtle contextual cues influence our choices without our knowledge, and often, to our detriment). So in the space of just a few years, I’ve gone from being a libertarian to basically being a raving socialist. And I’m not apologetic about that, because I think it’s what the data support.
  • That we should stress moral education when raising children. The reason I don’t believe this any more is much the same as the above: it turns out that children aren’t blank slates to be written on as we see fit. The data clearly show that post-conception, parents have very limited capacity to influence their children’s behavior or personality. So there’s something to be said for trying to provide an environment that makes children basically happy rather than one that tries to mould them into the morally upstanding little people they’re almost certain to turn into no matter what we do or don’t do.
  • That DLPFC is crucially involved in some specific cognitive process like inhibition or maintenance or manipulation or relational processing or… you name it. At various points in time, I’ve believed a number of these things. But for reasons I won’t go into, I now think the best characterization is something very vague and non-specific like “abstract processing” or “re-representation of information”. That sounds unsatisfying, but no one said the truth had to be satisfying on an intuitive level. And anyway, I’m pretty sure I’ll change my view about this many more times in future.
  • That there’s a general factor of intelligence. This is something I’ve been meaning to write about here for a while now (UPDATE: and I have now, here), and will hopefully get around to soon. But if you want to know why I don’t think g is real, read this explanation by Cosma Shalizi, which I think presents a pretty open-and-shut case.

That’s not a comprehensive list, of course; it’s just the first few things I could think of that I’ve changed my mind about. But it still bothers me a little bit that these are all things that I’ve never taken a public position on in any published article (or even on this blog). After all, it’s easy to change your mind when no one’s watching. Ego investment usually stems from telling other people what you believe, not from thinking out loud to yourself when you’re pacing around the living room. So I still worry that the fact I’ve never felt compelled to say “I used to think… but I now think” about any important idea I’ve asserted publicly means I must be fooling myself. And if there’s one thing that I unfailingly believe, it’s that I’m the easiest person to fool…

[For another take on the virtues of mind-changing, see Mark Crislip’s “Changing Your Mind“, which provided the impetus for this post.]

elsewhere on the internets…

Some stuff I’ve found interesting in the last week or two:

Nicholas Felton released his annual report of… himself. It’s a personal annual report on Felton, as seen through the eyes of a bunch of friends, family, and strangers:

Each day in 2009, I asked every person with whom I had a meaningful encounter to submit a record of this meeting through an online survey. These reports form the heart of the 2009 Annual Report. From parents to old friends, to people I met for the first time, to my dentist… any time I felt that someone had discerned enough of my personality and activities, they were given a card with a URL and unique number to record their experience.

You probably don’t much care about Nicholas Felton’s relationships, moods, or diet, but it’s a neat idea that’s really well executed. And it looks great [via Flowing Data].

Hackademe is a serialized novel about a man, with an axe, who dislikes professors enough to take them out behind the wood shed and… alright, no, it’s actually “a website devoted to sharing clever uses of technology, software, or modified items to solve problems related to information overload, time management, organization, productivity, and other challenges faced by academics on a daily basis.” Which is pretty cool, except that I have trouble seeing the word “hackademic” in a positive light…

The UK’s General Medical Council finally laid the smack down on the ethically-challenged Andrew Wakefield–he of “vaccines cause autism, and here’s a terrible and possibly fraudulent study to prove it” fame. There’s a very long but very good write-up of the whole debacle here. Unfortunately, the reprimand is really just symbolic at this point, because Wakefield now lives in the US, and isn’t (officially) practicing medicine any more. Instead, he spends his days pumping autistic children full of laxatives. I wish I were joking.

The Neuroskeptic has had a string of great posts in the last couple of weeks. I particularly enjoyed this one, wherein he reveals himself to be an expert on all matters sexual, dopaminergic, and British.

According to a study in Nature, running barefoot may be better for our feet than running in shoes. Turns out that barefoot runners strike the ground with the middle or ball of the foot, greatly reducing the force of impact. This may explain why so many (shod) runners get injured every year, and is supposed to make sense to you if you’re one of those evolutionist folks who think humans evolved to run long distances over the course of millions of years. But since you and I both know god created shoes around the same time he was borrowing Adam’s ribs, we can dispense with that sort of silliness.

The Census Bureau has some ‘splaining to do. Over at Freakonomics, Justin Wolfers discusses a new paper that uncovers massive (and inadvertent) problems with large chunks of census data. The fact that the Census Bureau screwed up isn’t terribly surprising (though it does call a number of published findings into question); everyone who works with data makes mistakes now and then, and the Census Bureau works with more data than most people. What is surprising is that Census has apparently refused to correct the problem, which is going to leave a lot of people hanging.

Slime mold has evolved the capacity to plan metropolitan transit systems! So claims a study in last week’s issue of Science. Ok, that’s not exactly what the article shows. What Tero et al. do show is that slime mold naturally forms networks that have a structure with comparable efficiency to the Tokyo rail system. Which, if you think about it, kind of does mean that slime mold has the capacity to plan metropolitan transit systems.

Projection Point is a neat website that measures something its creators term your “Risk Intelligence Quotient”. What’s interesting is that the site measures meta-cognitive judgments about risk rather than risk attitudes. In other words, it measures how much you know about how much you know, rather than how much you know. If that sounds confusing, spend 5 minutes answering 50 questions, and all will be made clear.

Pete Warden wants to divide up the US into 7 distinct chunks. Or at least, he wants to tell you how FaceBook thinks the US should be divided up, based on social connections between people in different geographic locations. There’s Stayathomia, Mormonia, and Socialistan. (Names have been deliberately altered to protect the guilty states.)

the fifty percent sleeper

That’s the title of a short fiction piece I have up at lablit.com today; it’s about brain scanning and beef jerky, among other things. It starts like this:

Day 1, 6 a.m.

Ok, I’m locked into this place now. I’ve got ten pounds of beef jerky, fifty dollars for the vending machine, and a flash drive full of experiments to run. If I can get eighteen usable subjects’ worth of data in five days, Yezerski mows my lawn, does my dishes for a week, and walks my dog three times a week for two months. If I don’t get eighteen subjects done, I mow his lawn, do his dishes, and drive his disabled grandmother to physiotherapy once a week for six months. Also: if I don’t get any subjects scanned, I have to tattoo Yezerski’s grandmother’s name on my back in 50-point font. We both know it’s not going to come to that, but Yezerski insisted we make it a part of the bet anyway.

And then goes on in a similar vein. You might enjoy it if you like MRI machines and cerebellums. If you don’t care for brains, you’ll probably just find it silly.

internet use causes depression! or not.

I have a policy of not saying negative things about people (or places, or things) on this blog, and I think I’ve generally been pretty good about adhering to that policy. But I also think it’s important for scientists to speak up in cases where journalists or other scientists misrepresent scientific research in a way that could have a potentially large impact on people’s behavior, and this is one of those cases. All day long, media outlets have been full of reports about a new study that purportedly reveals that the internet–that most faithful of friends, always just a click away with its soothing, warm embrace–has a dark side: using it makes you depressed!

In fairness, most of the stories have been careful to note that the  study only “links” heavy internet use to depression, without necessarily implying that internet use causes depression. And the authors acknowledge that point themselves:

“While many of us use the Internet to pay bills, shop and send emails, there is a small subset of the population who find it hard to control how much time they spend online, to the point where it interferes with their daily activities,” said researcher Dr. Catriona Morrison, of the University of Leeds, in a statement. “Our research indicates that excessive Internet use is associated with depression, but what we don’t know is which comes first. Are depressed people drawn to the Internet or does the Internet cause depression?”

So you might think all’s well in the world of science and science journalism. But in other places, the study’s authors weren’t nearly so circumspect. For example, the authors suggest that 1.2% of the population can be considered addicted to the internet–a rate they claim is double that of compulsive gambling; and they suggest that their results “feed the public speculation that overengagement in websites that serve/replace a social function might be linked to maladaptive psychological functioning,” and “add weight to the recent suggestion that IA should be taken seriously as a distinct psychiatric construct.”

These are pretty strong claims; if the study’s findings are to be believed, we should at least be seriously considering the possibility that using the internet is making some of us depressed. At worst, we should be diagnosing people with internet addiction and doing… well, presumably something to treat them.

The trouble is that it’s not at all clear that the study’s findings should be believed. Or at least, it’s not clear that they really support any of the statements made above.

Let’s start with what the study (note: restricted access) actually shows. The authors, Catriona Morrison and Helen Gore (M&G), surveyed 1,319 subjects via UK-based social networking sites. They had participants fill out 3 self-report measures: the Internet Addiction Test (IAT), which measures dissatisfaction with one’s internet usage; the Internet Function Questionnaire, which asks respondents to indicate the relative proportion of time they spend on different internet activities (e.g., e-mail, social networking, porn, etc.); and the Beck Depression Inventory (BDI), a very widely-used measure of depression.

M&G identify a number of findings, three of which appear to support most of their conclusions. First, they report a very strong positive correlation (r = .49) between internet addiction and depression scores; second, they identify a small group of 18 subjects (1.2%) who they argue qualify as internet addicts (IA group) based on their scores on the IAT; and third, they suggest that people who used the internet more heavily “spent proportionately more time on online gaming sites, sexually gratifying websites, browsing, online communities and chat sites.”

These findings may sound compelling, but there are a number of methodological shortcomings of the study that make them very difficult to interpret in any meaningful way. As far as I can tell, none of these concerns are addressed in the paper:

First, participants were recruited online, via social networking sites. This introduces a huge selection bias: you can’t expect to obtain accurate estimates of how much, and how adaptively, people use the internet by sampling only from the population of internet users! It’s the equivalent of trying to establish cell phone usage patterns by randomly dialing only land-line numbers. Not a very good idea. And note that, not only could the study not reach people who don’t use the internet, but it was presumably also more likely to oversample from heavy internet users. The more time a person spends online, the greater the chance they’d happen to run into the authors’ recruitment ad. People who only check their email a couple of times a week would be very unlikely to participate in the study. So the bottom line is, the 1.2% figure the authors arrive at is almost certainly a gross overestimate. The true proportion of people who meet the authors’ criteria for internet addiction is probably much lower. It’s hard to believe the authors weren’t aware of the issue of selection bias, and the massive problem it presents for their estimates, yet they failed to mention it anywhere in their paper.
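
To get a feel for how badly this kind of sampling scheme can inflate a prevalence estimate, here’s a back-of-the-envelope simulation. Every number in it is an assumption I made up for the sake of argument; the only point is that when the probability of stumbling across the recruitment ad scales with time spent online, heavy users get oversampled and the estimated rate climbs well above the true one:

```python
# Sketch of the selection-bias problem: if the chance of seeing the recruitment
# ad scales with time spent online, heavy users are oversampled and the
# estimated "addiction" rate is inflated. Every number here is an assumption.
import numpy as np

rng = np.random.default_rng(1)
population = 1_000_000
hours_online = rng.gamma(shape=2.0, scale=5.0, size=population)  # weekly hours, arbitrary
is_addicted = hours_online > np.quantile(hours_online, 0.995)    # true rate: 0.5% by construction

p_recruited = 0.01 * hours_online / hours_online.max()  # more hours online -> more ad exposure
sampled = rng.random(population) < p_recruited

print(f"True prevalence:   {is_addicted.mean():.2%}")
print(f"Sample prevalence: {is_addicted[sampled].mean():.2%}")
```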

Second, the cut-off score for being placed in the IA group appears to be completely arbitrary. The Internet Addiction Test itself was developed by Kimberly Young in a 1998 book entitled “Caught in the Net: How to Recognize the Signs of Internet Addiction–and a Winning Strategy to Recovery”. The test was introduced, as far as I can tell (I haven’t read the entire book, just skimmed it in Google Books), with no real psychometric validation. The cut-off of 80 points out of a maximum 100 possible as a threshold for addiction appears to be entirely arbitrary (in fact, in Young’s book, she defines the cut-off as 70; for reasons that are unclear, M&G adopted a cut-off of 80). That is, it’s not like Young conducted extensive empirical analysis and determined that people with scores of X or above were functionally impaired in a way that people with scores below X weren’t; by all appearances, she simply picked numerically convenient cut-offs (20 – 39 is average; 40 – 69 indicates frequent problems; and 70+ basically means the internet is destroying your life). Any small change in the numerical cut-off would have translated into a large change in the proportion of people in M&G’s sample who met criteria for internet addiction, making the 1.2% figure seem even more arbitrary.
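
Just to illustrate how much is riding on that arbitrary choice, here’s a tiny simulation using an invented, right-skewed distribution of IAT-like scores (I have no idea what the real distribution looks like). In this particular toy distribution, nudging the threshold from 70 to 80 changes the apparent prevalence by roughly a factor of two:

```python
# Sensitivity of the prevalence estimate to the cut-off, using an invented
# right-skewed distribution of IAT-like scores (nothing here is real data).
import numpy as np

rng = np.random.default_rng(4)
scores = np.clip(rng.gamma(shape=3.0, scale=11.0, size=100_000), 0, 100)  # fake 0-100 scores

for cutoff in (70, 80):
    print(f"Cut-off {cutoff}: {(scores >= cutoff).mean():.1%} classified as 'addicted'")
```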

Third, M&G claim that the Internet Function Questionnaire they used asks respondents to indicate the proportion of time on the internet that they spend on each of several different activities. For example, given the question “How much of your time online do you spend on e-mail?”, your options would be 0-20%, 21-40%, and so on. You would presume that all the different activities should sum to 100%; after all, you can’t really spend 80% of your online time gaming, and then another 80% looking at porn–unless you’re either a very talented gamer, or have an interesting taste in “games”. Yet, when M&G report absolute numbers for the different activities in tables, they’re not given in percentages at all. Instead, one of the table captions indicates that the values are actually coded on a 6-point Likert scale ranging from “rarely/never” to “very frequently”. Hopefully you can see why this is a problem: if you claim (as M&G do) that your results reflect the relative proportion of time that people spend on different activities, you shouldn’t be allowing people to essentially say anything they like for each activity. Given that people with high IA scores report spending more time overall than they’d like online, is it any surprise if they also report spending more time on individual online activities? The claim that high-IA scorers spend “proportionately more” time on some activities just doesn’t seem to be true–at least, not based on the data M&G report. This might also explain how it could be that IA scores correlated positively with nearly all individual activities. That simply couldn’t be true for real proportions (if you spend proportionately more time on e-mail, you must be spending proportionately less time somewhere else), but it makes perfect sense if the response scale is actually anchored with vague terms like “rarely” and “frequently”.
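
The compositional point is easy to demonstrate with fake data. In the sketch below, five activity ratings all scale with a single latent “overall usage” variable (standing in for whatever the IAT is really tracking); the raw Likert-style ratings all correlate positively with it, but once you convert them to true proportions that have to sum to 100%, they no longer can:

```python
# Fake-data demonstration: Likert-style frequency ratings that all track a single
# "overall usage" factor can all correlate positively with it, but true proportions
# (which must sum to 100%) cannot. Everything here is simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 1319  # same N as the study, but these are imaginary respondents
overall_usage = rng.normal(size=n)  # stand-in for whatever the IAT is really tracking

# Five activities whose raw frequency ratings each scale with overall usage (plus noise)
raw = overall_usage[:, None] * rng.uniform(0.5, 1.5, size=5) + rng.normal(size=(n, 5))
proportions = np.exp(raw) / np.exp(raw).sum(axis=1, keepdims=True)  # force shares to sum to 1

for name, data in [("raw ratings", raw), ("true proportions", proportions)]:
    rs = [np.corrcoef(overall_usage, data[:, j])[0, 1] for j in range(5)]
    print(f"{name:>16}: " + "  ".join(f"{r:+.2f}" for r in rs))
```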

Fourth, M&G consider two possibilities for the positive correlation between IAT and depression scores: (a) increased internet use causes depression, and (b) depression causes increased internet use. But there’s a third, and to my mind far more plausible, explanation: people who are depressed tend to have more negative self-perceptions, and are much more likely to endorse virtually any question that asks about dissatisfaction with one’s own behavior. Here are a couple of examples of questions on the IAT: “How often do you fear that life without the Internet would be boring, empty, and joyless?” “How often do you try to cut down the amount of time you spend on-line and fail?” Notice that there are really two components to these kinds of questions. One component is internet-specific: to what extent are people specifically concerned about their behavior online, versus in other domains? The other component is a general hedonic one, and has to do with how dissatisfied you are with stuff in general. Now, is there any doubt that, other things being equal, someone who’s depressed is going to be more likely to endorse an item that asks how often they fail at something? Or how often their life feels empty and joyless–irrespective of cause? No, of course not. Depressive people tend to ruminate and worry about all sorts of things. No doubt internet usage is one of those things, but that hardly makes it special or interesting. I’d be willing to bet money that if you created a Shoelace Tying Questionnaire that had questions like “How often do you worry about your ability to tie your shoelaces securely?” and “How often do you try to keep your shoelaces from coming undone and fail?”, you’d also get a positive correlation with BDI scores. Basically, depression and trait negative affect tend to correlate positively with virtually every measure that has a major evaluative component. That’s not news. To the contrary, given the types of questions on the IAT, it would have been astonishing if there wasn’t a robust positive correlation with depression.
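
If that sounds hand-wavy, it’s easy to show with simulated data that a single negative-affect factor with modest loadings on two otherwise unrelated questionnaires will produce a sizeable correlation between them. The loadings and item counts below are pure guesses on my part; nothing here is meant to reproduce M&G’s actual numbers:

```python
# Simulated illustration of the shared evaluative component: one latent
# negative-affect factor loads modestly on a depression-style scale and on an
# arbitrary "How often do you fail at X?" scale, and the two scale scores end up
# substantially correlated even though X has nothing to do with depression.
import numpy as np

rng = np.random.default_rng(3)
n = 1319  # matches the study's N, but the respondents are imaginary
negative_affect = rng.normal(size=n)

def evaluative_scale(n_items, loading):
    """Sum of items that each partly reflect negative affect, partly item-specific noise."""
    items = loading * negative_affect[:, None] + rng.normal(size=(n, n_items))
    return items.sum(axis=1)

bdi_like = evaluative_scale(n_items=21, loading=0.25)       # depression inventory analogue
shoelace_like = evaluative_scale(n_items=20, loading=0.15)  # "shoelace-tying worries" analogue

r = np.corrcoef(bdi_like, shoelace_like)[0, 1]
print(f"Correlation between the two scales, with zero true behavioral link: r = {r:.2f}")
```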

Fifth, and related to the previous point, no evidence is ever actually provided that people with high IAT scores differ in their objective behavior from those with low scores. Remember, this is all based on self-report. And not just self-report, but vague self-report. As far as I can tell, M&G never asked respondents to estimate how much time they spent online in a given week. So it’s entirely possible that people who report spending too much time online don’t actually spend much more time online than anyone else; they just feel that way (again, possibly because of a generally negative disposition). There’s actually some support for this idea: A 2004 study that sought to validate the IAT psychometrically found only a .22 correlation between IAT scores and self-reported time spent online. Now, a .22 correlation is perfectly meaningful, and it suggests that people who feel they spend too much time online also estimate that they really do spend more time online (though, again, bias is a possibility here too). But it’s a much smaller correlation than the one between IAT scores and depression, which fits with the above idea that there may not be any real “link” between internet use and depression above and beyond the fact that depressed individuals are more likely to endorse negatively worded items.

Finally, even if you ignore the above considerations, and decide to conclude that there is in fact a non-artifactual correlation between depression and internet use, there’s really no reason you would conclude that that’s a bad thing (which M&G hedge on, and many of the news articles haven’t hesitated to play up). It’s entirely plausible that the reason depressed individuals might spend more time online is because it’s an effective form of self-medication. If you’re someone who has trouble mustering up the energy to engage with the outside world, or someone who’s socially inhibited, online communities might provide you with a way to fulfill your social needs in a way that you would otherwise not have been able to. So it’s quite conceivable that heavy internet use makes people less depressed, not more; it’s just that the people who are more likely to use the internet heavily are more depressed to begin with. I’m not suggesting that this is in fact true (I find the artifactual explanation for the IAT-BDI correlation suggested above much more plausible), but just that the so-called “dark side” of the internet could actually be a very good thing.

In sum, what can we learn from M&G’s paper? Not that much. To be fair, I don’t necessarily think it’s a terrible paper; it has its limitations, but every paper does. The problem isn’t so much that the paper is bad; it’s that the findings it contains were blown entirely out of proportion, and twisted to support headlines (most of them involving the phrase “The Dark Side”) that they couldn’t possibly support. The internet may or may not cause depression (probably not), but you’re not going to get much traction on that question by polling a sample of internet respondents, using measures that have a conceptual overlap with depression, and defining groups based on arbitrary cut-offs. The jury is still out, of course, but these findings by themselves don’t really give us any reason to reconsider or try to change our online behavior.

ResearchBlogging.org
Morrison, C., & Gore, H. (2010). The relationship between excessive internet use and depression: A questionnaire-based study of 1,319 young people and adults. Psychopathology, 43(2), 121-126. DOI: 10.1159/000277001