The latest issue of the Journal of Neuroscience contains an interesting article by Ecker et al. in which the authors attempted to classify people with autism spectrum disorder (ASD) and healthy controls based on their brain anatomy, and report achieving “a sensitivity and specificity of up to 90% and 80%, respectively.” Before unpacking what that means, and why you probably shouldn’t get too excited (about the clinical implications, at any rate; the science is pretty cool), here’s a snippet from the decidedly optimistic press release that accompanied the study:
“Scientists funded by the Medical Research Council (MRC) have developed a pioneering new method of diagnosing autism in adults. For the first time, a quick brain scan that takes just 15 minutes can identify adults with autism with over 90% accuracy. The method could lead to the screening for autism spectrum disorders in children in the future.”
If you think this sounds too good to be true, that’s because it is. Carl Heneghan explains why in an excellent article in the Guardian:
The way the brain scan results are portrayed reflects one of the simplest mistakes to make in interpreting diagnostic test accuracy. What has happened is, the sensitivity has been taken to be the positive predictive value, which is what you want to know: if I have a positive test, do I have the disease? Not: if I have the disease, do I have a positive test? It would help if the results included a measure called the likelihood ratio (LR), which is the likelihood that a given test result would be expected in a patient with the target disorder compared to the likelihood that the same result would be expected in a patient without that disorder. In this case the LR is 4.5. We’ve put up an article if you want to know more on how to calculate the LR.
In the general population the prevalence of autism is 1 in 100; given a positive test, the chances of having the disease are 4.5 times higher. This gives a positive predictive value of about 4.5%; roughly 5 in every 100 with a positive test would have autism.
For those still feeling confused and not convinced, let’s think of 10,000 children. Of these, 100 (1%) will have autism; 90 of these 100 would have a positive test, and 10 are missed as they have a negative test: there’s the 90% accuracy reported by the media.
But what about the 9,900 who don’t have the disease? 7,920 of these will test negative (the specificity in the Ecker paper is 80%). The real worry, though, is the number without the disease who test positive. This will be substantial: 1,980 of the 9,900 without the disease. This is what happens at very low prevalences: the number falsely misdiagnosed rockets. Alarmingly, of the 2,070 with a positive test, only 90 will have the disease, which is roughly 4.5%.
In other words, if you screened everyone in the population for autism, and assume the best about the classifier reported in the JNeuro article (e.g., that the sample of 20 ASD participants they used is perfectly representative of the broader ASD population, which seems unlikely), only about 1 in 20 people who receive a positive diagnosis would actually deserve one.
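Heneghan’s arithmetic is easy to replay. Here’s a minimal sketch in Python (my own illustration, using only the sensitivity, specificity, and prevalence figures quoted above) that reproduces the 10,000-person breakdown:

```python
# Replaying Heneghan's worked example with the figures quoted above.
sensitivity = 0.90   # P(positive test | autism), as reported by Ecker et al.
specificity = 0.80   # P(negative test | no autism)
prevalence = 0.01    # 1 in 100 in the general population

n = 10_000
with_asd = n * prevalence                 # 100 people with autism
true_pos = with_asd * sensitivity         # 90 correctly flagged
false_neg = with_asd - true_pos           # 10 missed
without_asd = n - with_asd                # 9,900 people without autism
true_neg = without_asd * specificity      # 7,920 correctly cleared
false_pos = without_asd - true_neg        # 1,980 falsely flagged

lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
ppv = true_pos / (true_pos + false_pos)   # P(autism | positive test)

print(f"LR+ = {lr_pos:.1f}")                # 4.5
print(f"false positives = {false_pos:.0f}") # 1980
print(f"P(autism | positive) = {ppv:.3f}")  # ~0.043, i.e. roughly 4.5%
```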
Ecker et al. object to this characterization, and reply to Heneghan in the comments (through the MRC PR office):
Our test was never designed to screen the entire population of the UK. This is simply not practical in terms of costs and effort, and besides totally unjustified: why would we screen everybody in the UK for autism if there is no evidence whatsoever that an individual is affected? The same case applies to other diagnostic tests. Not every single individual in the UK is tested for HIV. Clearly this would be too costly and unnecessary. However, in the group of individuals that are tested for the virus, we can be very confident that if the test is positive, that means a patient is infected. The same goes for our approach.
Essentially, the argument is that, since people would presumably be sent for an MRI scan because they were already under consideration for an ASD diagnosis, and not at random, the proportion of positive results that are false would in fact be much lower than 95%, and closer to the 20% implied by the specificity reported in the article.
One response to this reply (which is in fact Heneghan’s response in the comments) is to point out that the pre-test probability of ASD would need to be pretty high already in order for the classifier to add much. For instance, even if fully 30% of people who were sent for a scan actually had ASD, the posterior probability of ASD given a positive result would still be only 66% (Heneghan’s numbers, which the quick check below bears out). Heneghan nicely contrasts these results with the standard for HIV testing, which “reports sensitivity of 99.7% and specificity of 98.5% for enzyme immunoassay.” Clearly, we have a long way to go before doctors can order MRI-based tests for ASD and feel reasonably confident that a positive result is sufficient grounds for an ASD diagnosis.
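Here’s the same Bayes calculation wrapped in a small function, so you can see how the post-test probability moves with the pre-test probability (again a sketch, assuming the paper’s reported 90% sensitivity and 80% specificity):

```python
def post_test_probability(pre_test, sensitivity=0.90, specificity=0.80):
    """Positive predictive value via Bayes' rule."""
    true_pos = pre_test * sensitivity
    false_pos = (1 - pre_test) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

for pre_test in (0.01, 0.30, 0.50):
    print(f"pre-test P = {pre_test:.2f} -> "
          f"post-test P = {post_test_probability(pre_test):.2f}")
# pre-test P = 0.01 -> post-test P = 0.04
# pre-test P = 0.30 -> post-test P = 0.66   (Heneghan's figure)
# pre-test P = 0.50 -> post-test P = 0.82
```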
Setting Heneghan’s concerns about base rates aside, there’s a more general issue that he doesn’t touch on. It’s one that’s not specific to this particular study, and applies to nearly all studies that attempt to develop “biomarkers” for existing disorders. The problem is that the sensitivity and specificity values that people report for their new diagnostic procedure in these types of studies generally aren’t the true parameters of the procedure. Rather, they’re the sensitivity and specificity under the assumption that the diagnostic procedures used to classify patients and controls in the first place are themselves correct. In other words, in order to believe the results, you have to assume that the researchers correctly classified the subjects into patient and control groups using other procedures. In cases where the gold standard test used to make the initial classification is known to have near 100% sensitivity and specificity (e.g., for the aforementioned HIV tests), one can reasonably ignore this concern. But when we’re talking about mental health disorders, where diagnoses are fuzzy and borderline cases abound, it’s very likely that the “gold standard” isn’t really all that great to begin with.
Concretely, studies that attempt to develop biomarkers for mental health disorders face two substantial problems. One is that it’s extremely unlikely that the clinical diagnoses are ever perfect; after all, if they were perfect, there’d be little point in trying to develop other diagnostic procedures! In this particular case, the authors selected subjects into the ASD group based on standard clinical instruments and structured interviews. I don’t know that there are many clinicians who’d claim with a straight face that the current diagnostic criteria for ASD (and there are multiple sets to choose from!) are perfect. From my limited knowledge, the criteria for ASD seem to be even more controversial than those for most other mental health disorders (which is saying something, if you’ve been following the ongoing DSM-V saga). So really, the accuracy of the classifier in the present study, even if you put the best face on it and ignore the base rate issue Heneghan brings up, is undoubtedly south of the 90% sensitivity / 80% specificity the authors report. How much south, we just don’t know, because we don’t really have any independent, objective way to determine who “really” should get an ASD diagnosis and who shouldn’t (assuming you think it makes sense to make that kind of dichotomous distinction at all). But 90% accuracy is probably a pipe dream, if for no other reason than it’s hard to imagine that level of consensus about autism spectrum diagnoses.
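To make the point concrete, here’s a toy simulation (entirely hypothetical numbers, nothing from the paper): suppose the clinical “gold standard” labels are right 90% of the time, and suppose we somehow built a classifier that reproduces those labels perfectly. Scored against the labels, it looks flawless; scored against the true condition, it can’t do better than the labels themselves:

```python
import random

random.seed(42)

N = 100_000
LABEL_ACCURACY = 0.90  # hypothetical: clinical labels match the truth 90% of the time

# True disorder status (balanced groups, as in a case-control design).
truth = [random.random() < 0.5 for _ in range(N)]
# Clinical "gold standard" labels: right 90% of the time, wrong 10%.
labels = [t if random.random() < LABEL_ACCURACY else not t for t in truth]
# Best-case classifier: agrees with the clinical labels perfectly.
predictions = labels

vs_labels = sum(p == l for p, l in zip(predictions, labels)) / N
vs_truth = sum(p == t for p, t in zip(predictions, truth)) / N

print(f"accuracy vs. clinical labels: {vs_labels:.2f}")  # 1.00
print(f"accuracy vs. true status:     {vs_truth:.2f}")   # ~0.90
```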
The second problem is that, because the researchers are using the MRI-based classifier to predict the clinician-based diagnosis, it simply isn’t possible for the former to exceed the accuracy of the latter. That bears repeating, because it’s important: no matter how good the MRI-based classifier is, it can only be as good as the procedures used to make the original diagnosis, and no better. It cannot, by definition, make diagnoses that are any more accurate than the clinicians who screened the participants in the authors’ ASD sample. So when you see the press release say this:
For the first time, a quick brain scan that takes just 15 minutes can identify adults with autism with over 90% accuracy.
You should really read it as this:
The method relies on structural (MRI) brain scans and has an accuracy rate approaching that of conventional clinical diagnosis.
That’s not quite as exciting, obviously, but it’s more accurate.
To be fair, there’s something of a catch-22 here, in that the authors didn’t really have a choice about whether or not to diagnose the ASD group using conventional criteria. If they hadn’t, reviewers and other researchers would have complained that we can’t tell if the ASD group is really an ASD group, because the authors used non-standard criteria. Under the circumstances, they did the only thing they could do. But that doesn’t change the fact that it’s misleading to intimate, as the press release does, that the new procedure might be any better than the old ones. It can’t be, by definition.
Ultimately, if we want to develop brain-based diagnostic tools that are more accurate than conventional clinical diagnoses, we’re going to need to show that these tools are capable of predicting meaningful outcomes that clinician diagnoses can’t. This isn’t an impossible task, but it’s a very difficult one. One approach you could take, for instance, would be to compare the ability of clinician diagnosis and MRI-based diagnosis to predict functional outcomes among subjects at a later point in time. If you could show that MRI-based classification of subjects at an early age was a stronger predictor of receiving an ASD diagnosis later in life than conventional criteria, that would make a really strong case for using the former approach in the real world (see the sketch below). Short of that type of demonstration, though, the only reason I can imagine wanting to use a procedure that was developed by trying to duplicate the results of an existing procedure is in the event that the new procedure is substantially cheaper or more efficient than the old one. Meaning, it would be reasonable enough to say “well, look, we don’t do quite as well with this approach as we do with a full clinical evaluation, but at least this new approach costs much less.” Unfortunately, that’s not really true in this case, since the price of even a short MRI scan is generally going to exceed that of a comprehensive evaluation by a psychiatrist or psychotherapist. And while it could theoretically be much faster to get an MRI scan than an appointment with a mental health professional, I suspect that’s not generally going to be true in practice either.
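To make that validation design concrete, here’s a bare-bones sketch of what the comparison might look like (hypothetical throughout: the function and variable names are mine, and it assumes scikit-learn is available):

```python
# Hypothetical validation design, not anything from the paper: collect
# clinician diagnoses and MRI classifier scores at baseline, then see which
# better predicts a functional outcome measured years later.
from sklearn.metrics import roc_auc_score

def compare_baseline_predictors(later_outcome, clinician_dx, mri_score):
    """Return the ROC AUC of each baseline measure against the later outcome.

    later_outcome: 1 if the person met ASD criteria at follow-up, else 0
    clinician_dx:  baseline clinical diagnosis (0/1)
    mri_score:     baseline continuous score from the MRI-based classifier
    """
    return (roc_auc_score(later_outcome, clinician_dx),
            roc_auc_score(later_outcome, mri_score))
```

If the MRI score reliably produced the higher AUC against outcomes the clinicians never saw, that would be evidence it captures something the clinical criteria miss.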
Having said all that, I hasten to note that all this is really a critique of the MRC press release and the subsequent lousy science reporting, and not of the science itself. I actually think the science itself is very cool (the Neuroskeptic just wrote a great rundown of the methods and results, so there’s not much point in my describing them here). People have been doing really interesting work with pattern-based classifiers in the neuroimaging literature for several years now, but relatively few studies have applied this kind of technique to discriminating between groups of individuals in a clinical setting. While I’m not really optimistic that the technique the authors introduce in this paper is going to change the way diagnosis happens any time soon (or at least, I’d argue that it shouldn’t), there’s no question that the general approach will be an important piece of future efforts to improve clinical diagnoses by integrating biological data with existing approaches. But that’s not going to happen overnight, and in the meantime, I think it’s pretty irresponsible of the MRC to be issuing press releases claiming that its researchers can diagnose autism in adults with 90% accuracy.
Ecker, C., Marquand, A., Mourão-Miranda, J., Johnston, P., Daly, E. M., Brammer, M. J., Maltezos, S., Murphy, C. M., Robertson, D., Williams, S. C., & Murphy, D. G. (2010). Describing the brain in autism in five dimensions: magnetic resonance imaging-assisted diagnosis of autism spectrum disorder using a multiparameter classification approach. The Journal of Neuroscience, 30(32), 10612-10623. PMID: 20702694
Here’s another thing that bothers me about biomarkers: suppose an MRI index is used to diagnose me with a clinical disorder, but I don’t manifest behavioral symptoms of that disorder. What, then, does it even mean to say I “have” the disorder? Whether or not psychological disorders such as ASD have neurological causes, they are still *psychological* disorders; that is, what is chiefly problematic about having them is that they mess up behavioral and psychological functioning. Using a biomarker instead of a behavioral index, by definition, makes the diagnosis LESS direct.
Exciting as these things are, I think people often lose track of the fact that psychology is fundamentally about behavioral and cognitive function; if you want to understand things like psychological disorders, there is no getting around having to understand the function on its own terms. Neural substrates improve our understanding of function, but they do not replace it.
Yeah, I think that’s exactly right. No matter how well you feel physically, you’d almost certainly take a positive HIV test very seriously, and seek further medical attention. Whereas if someone tells you you have the brain of a depressed person, but you’re a perpetually upbeat, optimistic person, the appropriate response would probably be to write the result off as a false positive and ignore it. The subjective and behavioral symptoms have primacy when it comes to mental health disorders.
Nice work. We may develop teleportation and cold fusion in my lifetime, but if we achieve accurate science reporting about those developments in the same span, I’ll be surprised.
One possible advantage of the MRI test is that it is more objective. The current clinical assessments for autism involve a fair amount of clinical judgement, and watching a video of the same interview, two clinicians may agree on only 80% of the items scored (less agreement is usually taken to mean one of the clinicians isn’t properly trained). The MRI analysis could presumably be run automatically and is going to give the same number every time you run it. So I guess it has higher repeatability than the clinical interviews.
I don’t know that I’d say it’s any more objective, because, as I pointed out above, the classifier is actually trained to predict clinician-based diagnosis. Ultimately, the classifier is bound to perform no better than (and in practice, somewhat worse than) the clinicians who made the diagnoses in this study. So it’s only objective if you think the clinical diagnoses in this case were objective. But since the diagnoses were made using standard screening measures, that seems like a stretch.
It’s certainly true that MRI-based diagnosis may be more consistent, in the sense that it’ll always give you the same answer whereas a clinician may not. But that isn’t necessarily a good thing in and of itself, since you can have a highly consistent method that’s consistently wrong (if you want to be perfectly consistent, all you need to do is say that everyone’s always autistic!). Consistency is secondary to accuracy when it comes to clinical diagnosis; wouldn’t you rather see a clinician who’s right 90% of the time but doesn’t always give the same answer than one who’s right 50% of the time but always errs in exactly the same way?
Hi Tal
Very nicely put. I blogged about the same study from a slightly different angle (http://goo.gl/b/1xTi), but agree with you 100% that there’s a fundamental limitation in trying to develop new diagnostic methods by seeing how well they agree with the old methods we’re trying to replace.
Ultimately, the problem is that autism is itself a subjective category. Just to be clear, I’m not saying that it doesn’t exist. But the diagnostic boundaries are fairly arbitrary (as evidenced by the fact that they keep changing). So I don’t believe it’ll ever be possible to develop an objective test to distinguish between autism / not autism.
Where I think these kinds of biomarkers may be useful is in identifying different subgroups within autism. Two kids who present as superficially quite similar aged three may have very different developmental trajectories and may respond very differently to different interventions. It may be that we can use information about their underlying biological differences to predict how they will develop and what support will be most appropriate.