a possible link between pesticides and ADHD

A forthcoming article in the journal Pediatrics that’s been getting a lot of press attention suggests that exposure to common pesticides may be associated with a substantially elevated risk of ADHD. More precisely, what the study found was that elevated urinary concentrations of organophosphate metabolites were associated with an increased likelihood of meeting criteria for an ADHD diagnosis. One of the nice things about this study is that the authors used archival data from the (very large) National Health and Nutrition Examination Survey (NHANES), so they were able to control for a relatively broad range of potential confounds (e.g., gender, age, SES, etc.). The primary finding is, of course, still based on observational data, so you wouldn’t necessarily want to conclude that exposure to pesticides causes ADHD. But it’s a finding that converges with previous work in animal models demonstrating that high exposure to organophosphate pesticides causes neurodevelopmental changes, so it’s by no means a crazy hypothesis.

It’s pleasantly surprising to see how responsibly the popular press has covered this story (e.g., this, this, and this). Despite the obvious potential for alarmism, very few articles have led with a headline implying a causal link between pesticides and ADHD. They all say things like “associated with”, “tied to”, or “linked to”, which is exactly right. And many even explicitly mention the size of the effect in question–namely, approximately a 50% increase in risk of ADHD per 10-fold increase in concentration of pesticide metabolites. Given that most of the articles contain cautionary quotes from the study’s authors, I’m guessing the authors really emphasized the study’s limitations when dealing with the press, which is great. In any case, because the basic details of the study have already been amply described elsewhere (I thought this short CBS article was particularly good), I’ll just mention a few random thoughts here:

  • Often, epidemiological studies suffer from a gaping flaw in the sense that the more interesting causal story (and the one that prompts media attention) is far less plausible than other potential explanations (a nice example of this is the recent work on the social contagion of everything from obesity to loneliness). That doesn’t seem to be the case here. Obviously, there are plenty of other reasons you might get a correlation between pesticide metabolites and ADHD risk–for instance, ADHD is substantially heritable, so it could be that parents with a disposition to ADHD also have systematically different dietary habits (i.e., parental dispositions are a common cause of both urinary metabolites and ADHD status in children). But given the aforementioned experimental evidence, it’s not obvious that alternative explanations for the correlation are much more plausible than the causal story linking pesticide exposure to ADHD, so in that sense this is potentially a very important finding.
  • The use of a dichotomous dependent variable (i.e., children either meet criteria for ADHD or don’t; there are no shades of ADHD gray here) is a real problem in this kind of study, because it can make the resulting effects seem deceptively large. The intuitive way we think about the members of a category is to think in terms of prototypes, so that when you think about “ADHD” and “Not-ADHD” categories, you’re probably mentally representing an extremely hyperactive, inattentive child for the former, and a quiet, conscientious kid for the latter. If that’s your mental model, and someone comes along and tells you that pesticide exposure increases the risk of ADHD by 50%, you’re understandably going to freak out, because it’ll seem quite natural to interpret that as a statement that pesticides have a 50% chance of turning average kids into hyperactive ones. But that’s not the right way to think about it. In all likelihood, pesticides aren’t causing a small proportion of kids to go from perfectly average to completely hyperactive; instead, what’s probably happening is that the entire distribution is shifting over slightly. In other words, most kids who are exposed to pesticides (if we assume for the sake of argument that there really is a causal link) are becoming slightly more hyperactive and/or inattentive.
  • Put differently, what happens when you have a strict cut-off for diagnosis is that even small increases in underlying symptoms can result in a qualitative shift in category membership. If ADHD symptoms were measured on a continuous scale (which they actually probably were, before being dichotomized to make things simple and more consistent with previous work), these findings might have been reported as something like “a 10-fold increase in pesticide exposures is associated with a 2-point increase on a 30-point symptom scale,” which would have made it much clearer that, at worst, pesticides are only one of many contributing factors to ADHD, and almost certainly not nearly as big a factor as some others. That’s not to say we shouldn’t be concerned if subsequent work supports a causal link, but just that we should retain perspective on what’s involved. No one’s suggesting that you’re going to feed your child an unwashed pear or two and end up with a prescription for Ritalin; the more accurate view would be that you might have a minority of kids who are already at risk for ADHD, and this would be just one more precipitating factor.
  • It’s also worth keeping in mind that the relatively large increase in ADHD risk is associated with a ten-fold increase in pesticide metabolites. As the authors note, that corresponds to the difference between the 25th and 75th percentiles in the sample. Although we don’t know exactly what that means in terms of real-world exposure to pesticides (because the authors didn’t have any data on grocery shopping or eating habits), it’s almost certainly a very sizable difference (I won’t get into the reasons why, except to note that the rank-order of pesticide metabolites must be relatively stable among children, or else there wouldn’t be any association with a temporally-extended phenotype like ADHD). The point is, it’s probably not so easy to go from the 25th to the 75th percentile just by eating a few more fruits and vegetables here and there. So while it’s certainly advisable to try and eat better, and potentially to buy organic produce (if you can afford it), you shouldn’t assume that you can halve your child’s risk of ADHD simply by changing his or her diet slightly. These are, at the end of the day, small effects.
  • The authors report that fully 12% of children in this nationally representative sample met criteria for ADHD (mostly of the inattentive subtype). This, frankly, says a lot more about how silly the diagnostic criteria for ADHD are than about the state of the nation’s children. It’s simply not plausible to suppose that 1 in 8 children really suffer from what is, in theory at least, a severe, potentially disabling disorder. I’m not trying to trivialize ADHD or argue that there’s no such thing, but simply to point out the dangers of medicalization. Once you’ve reached the point where 1 in every 8 people meet criteria for a serious disorder, the label is in danger of losing all meaning.
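The dichotomization point above is easy to demonstrate with a quick simulation. The numbers below (a symptom score that’s roughly normal on a 30-point scale, a diagnostic cutoff of 21, a 1-point shift) are purely illustrative assumptions on my part, not the study’s actual values; the point is just that nudging an entire distribution slightly can produce a 50%-plus jump in the fraction of cases above a fixed threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symptom distribution: mean 10, SD 5 on a 30-point scale.
# These parameters are made up for illustration.
n = 1_000_000
baseline = rng.normal(10, 5, n)

# Suppose exposure shifts every child's score up by just 1 point.
shifted = baseline + 1.0

# Dichotomize at a strict diagnostic cutoff.
cutoff = 21
p_base = np.mean(baseline > cutoff)
p_shift = np.mean(shifted > cutoff)

print(f"diagnosed at baseline: {p_base:.1%}")
print(f"diagnosed after shift: {p_shift:.1%}")
print(f"relative risk:         {p_shift / p_base:.2f}")
```

With these particular made-up parameters, the relative risk comes out around 1.6: a 1-point shift on a 30-point scale masquerades as a 60% increase in “ADHD risk” once you collapse the scale into diagnosed/not-diagnosed. That’s exactly why dichotomous outcomes can make modest distributional shifts look alarming.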

Bouchard, M., Bellinger, D., Wright, R., & Weisskopf, M. (2010). Attention-Deficit/Hyperactivity Disorder and Urinary Metabolites of Organophosphate Pesticides. Pediatrics. DOI: 10.1542/peds.2009-3058

elsewhere on the net

I’ve been swamped with work lately, so blogging has taken a backseat. I keep a text file on my desktop of interesting things I’d like to blog about; normally, about three-quarters of the links I paste into it go unblogged, but in the last couple of weeks it’s more like 98%. So here are some things I’ve found interesting recently, in no particular order:

It’s World Water Day 2010! Or at least it was a week ago, which is when I should have linked to these really moving photos.

Carl Zimmer has a typically brilliant (and beautifully illustrated) article in the New York Times about “Unseen Beasts, Then and Now”:

Somewhere in England, about 600 years ago, an artist sat down and tried to paint an elephant. There was just one problem: he had never seen one.

John Horgan writes a surprisingly bad guest blog post for Scientific American in which he basically accuses neuroscientists (not a neuroscientist or some neuroscientists, but all of us, collectively) of selling out by working with the US military. I’m guessing that the number of working neuroscientists who’ve ever received any sort of military funding is somewhere south of 10%, and is probably much smaller than the corresponding proportion in any number of other scientific disciplines, but why let data get in the way of a good anecdote or two. [via Peter Reiner]

Mark Liberman follows up his first critique of Louann Brizendine’s new “book” The Male Brain with a second one, now that he’s actually got his hands on a copy. Verdict: the book is still terrible. Mark was also kind enough to answer my question about what the mysterious “sexual pursuit area” is. Apparently it’s the medial preoptic area. And the claim that this area governs sexual behavior in humans and is 2.5 times larger in males is, once again, based entirely on work in the rat.

Commuting sucks. Jonah Lehrer discusses evidence from happiness studies (by way of David Brooks) suggesting that most people would be much happier living in a smaller house close to work than a larger house that requires a lengthy commute:

According to the calculations of Frey and Stutzer, a person with a one-hour commute has to earn 40 percent more money to be as satisfied with life as someone who walks to the office.

I’ve taken these findings to heart, and whenever my wife and I move now, we prioritize location over space. We’re currently paying through the nose to live in a 750 square foot apartment near downtown Boulder. It’s about half the size of our old place in St. Louis, but it’s close to everything, including our work, and we love living here.

The modern human brain is much bigger than it used to be, but we didn’t get that way overnight. John Hawks disputes Colin Blakemore’s claim that “the human brain got bigger by accident and not through evolution”.

Attitudes toward causal modeling of correlational (and even some experimental) data differ widely: Sanjay Srivastava leans (or maybe used to lean) toward the permissive side, while Andrew Gelman is skeptical. There’s been a flurry of recent work suggesting that causal modeling techniques like mediation analysis and SEM suffer from a number of serious and underappreciated problems, and after reading this paper by Bullock, Green and Ha, I’m inclined to agree.

A landmark ruling by a New York judge yesterday has the potential to invalidate existing patents on genes, which currently cover about 20% of the human genome in some form. Daniel MacArthur has an excellent summary.

elsewhere on the internets…

The good people over at OKCupid, the best dating site on Earth (their words, not mine! I’m happily married!), just released a new slew of data on their OKTrends blog. Apparently men like women with smiley, flirty profile photos, and women like dismissive, unsmiling men. It’s pretty neat stuff, and definitely worth a read. Mating rituals aside, though, what I really like to think about whenever I see a new OKTrends post is how many people I’d be willing to kill to get my hands on their data.

Genetic Future covers the emergence of Counsyl, a new player in the field of personal genomics. Unlike existing outfits like 23andme and deCODEme.com, Counsyl focuses on rare Mendelian disorders, with an eye to helping prospective parents evaluate their genetic liabilities. What’s really interesting about Counsyl is its business model; if you have health insurance provided by Aetna or Blue Cross, you could potentially get a free test. Of course, the catch is that Aetna or Blue Cross gets access to your results. In theory, this shouldn’t matter, since health insurers can’t use genetic information as grounds for discrimination. But then, on paper, employers can’t use race, gender, or sexual orientation as grounds for discrimination either, and yet we know it’s easier to get hired if your name is John than Jamal. That said, I’d probably go ahead and take Aetna up on its generous offer, except that my wife and I have no plans for kids, and the Counsyl test looks like it stays away from the garden-variety SNPs the other services cover…

The UK has banned the export of dowsing rods. In 2010! This would be kind of funny if not for the fact that dozens if not hundreds of Iraqis have probably died horrible deaths as a result of the Iraqi police force trying to detect roadside bombs using magic. [via Why Evolution is True].

Over at Freakonomics, regular contributor Ryan Hagen interviews psychologist, magician, and author Richard Wiseman, who just published a new empirically-based self-help book (can such a thing exist?). I haven’t read the book, but the interview is pretty good. Favorite quote:

What would I want to do? I quite like the idea of the random giving of animals. There’s a study where they took two groups of people and randomly gave people in one group a dog. But I’d quite like to replicate that with a much wider range of animals — including those that should be in zoos. I like the idea of signing up for a study, and you get home and find you’ve got to look after a wolf…

On a professional note, Professor in Training has a really great two-part series (1, 2) on what new tenure-track faculty need to know before starting the job. I’ve placed both posts inside Google Reader’s golden-starred vault, and fully expect to come back to them next fall when I’m on the job market. Which means if you’re reading this and you’re thinking of hiring me, be warned: I will demand that a life-size bobble-head doll of Hans Eysenck be installed in my office, and thanks to PiT, I do now have the awesome negotiating powers needed to make it happen.

a well-written mainstream article on fMRI?!

Craig Bennett, of prefrontal.org and dead salmon fame, links to a really great Science News article on the promises and pitfalls of fMRI. As Bennett points out, the real gem of the article is the “quote of the week” from Nikos Logothetis (which I won’t spoil for you here; you’ll have to do just a little more work to get to it). But the article is full of many other insightful quotes from fMRI researchers, and manages to succinctly and accurately describe a number of recent controversies in the fMRI literature without sacrificing too much detail. Usually when I come across a mainstream article on fMRI, I pre-emptively slap the screen a few times before I start reading, because I know I’m about to get angry. Well, I did that this time too, so my hand hurts per usual, but at least this time I feel pretty good about it. Kudos to Laura Sanders for writing one of the best non-technical accounts I’ve seen of the current state of fMRI research (and one that, unlike a number of other articles in this vein, actually ends on a balanced and optimistic note).

younger and wiser?

Peer reviewers get worse as they age, not better. That’s the conclusion drawn by a study discussed in the latest issue of Nature. The study isn’t published yet, and it’s based on analysis of 1,400 reviews in just one biomedical journal (The Annals of Emergency Medicine), but there’s no obvious reason why these findings shouldn’t generalize to other areas of research. From the article:

The most surprising result, however, was how individual reviewers’ scores changed over time: 93% of them went down, which was balanced by fresh young reviewers coming on board and keeping the average score up. The average decline was 0.04 points per year.

That 0.04/year is, I presume, on a scale of 5, and the quality of reviews was rated by the editors of the journal. This turns the dogma of experience on its head, in that it suggests editors are better off asking more junior academics for reviews (though whether this data actually affects editorial policy remains to be seen). Of course, the key question–and one that unfortunately isn’t answered in the study–is why more senior academics give worse reviews. It’s unlikely that experience makes you a poorer scientist, so the most likely explanation is that “older reviewers tend to cut corners,” as the article puts it. Anecdotally, I’ve noticed this myself in the dozen or so reviews I’ve completed; my reviews often tend to be relatively long compared to those of the other reviewers, most of whom are presumably more senior. I imagine length of review is (very) loosely used as a proxy for quality of review by editors, since a longer review will generally be more comprehensive. But this probably says more about constraints on reviewers’ time than anything else. I don’t have grants to write and committees to sit on; my job consists largely of writing papers, collecting data, and playing the occasional video game (er, keeping up with the literature).

Aside from time constraints, senior researchers probably also have less riding on a review than junior researchers do. A superficial review from an established researcher is unlikely to affect one’s standing in the field, but as someone with no reputation to speak of, I usually feel a modicum of pressure to do at least a passable job reviewing a paper. Not that reviews make a big difference (they are, after all, anonymous to all but the editors, and occasionally, the authors), but at this point in my career they seem like something of an opportunity, whereas I’m sure twenty or thirty years from now they’ll feel much more like an obligation.

Anyway, that’s all idle speculation. The real highlight of the Nature article is actually this gem:

Others are not so convinced that older reviewers aren’t wiser. “This is a quantitative review, which is fine, but maybe a qualitative study would show something different,” says Paul Hébert, editor of the Canadian Medical Association Journal in Ottawa. A thorough review might score highly on the Annals scale, whereas a less thorough but more insightful review might not, he says. “When you’re young you spend more time on it and write better reports. But I don’t want a young person on a panel when making a multi-million-dollar decision.”

I think the second quote is on the verge of being reasonable (though DrugMonkey disagrees), but the first is, frankly, silly. Qualitative studies can show almost anything you want them to show; I thought that was precisely why we do quantitative studies…

[h/t: DrugMonkey]