Apparently, many scientists have rather strong feelings about data sharing mandates. In the wake of PLOS’s recent announcement–which says that, effective now, authors of all papers published in PLOS journals must deposit their data in a publicly accessible location–a veritable gaggle of scientists have taken to their blogs to voice their outrage and/or support for the policy. The nays have posts like DrugMonkey’s complaint that the inmates are running the asylum at PLOS (more choice posts are here, here, here, and here); the yays have Edmund Hart telling the nays to get over themselves and share their data (more posts here, here, and here). While I’m a bit late to the party (mostly because I’ve been traveling and otherwise indisposed), I guess I’ll go ahead and throw my hat into the ring in support of data sharing mandates. For a number of reasons outlined below, I think time will show the anti-PLOS folks to be very clearly on the wrong side of this issue.
Mandatory public deposition is like, totally way better than a “share-upon-request” approach
You might think that proactive data deposition has little incremental utility over a philosophy of sharing one’s data upon request, since emails are these wordy little things that only take a few minutes of a data-seeker’s time to write. But it’s not just the time and effort that matter. It’s also the psychology and technology. Psychology, because if you don’t know the person on the other end, or if the data are potentially useful but not essential to you, or if you’re the agreeable sort who doesn’t like to bother other people, it’s very easy to just say, “nah, I’ll just go do something else”. Scientists are busy people. If a dataset is a click away, many people who wouldn’t feel comfortable emailing the author to ask for it will happily download it and play with it. Technology, because data that isn’t publicly available is data that isn’t publicly indexed. It’s all well and good to say that if someone really wants a dataset, they can email you to ask for it, but if someone doesn’t know about your dataset in the first place–because it isn’t in the first three pages of Google results–they’re going to have a hard time asking.
People don’t actually share on request
Much of the criticism of the PLOS data sharing policy rests on the notion that the policy is unnecessary, because in practice most journals already mandate that authors share their data upon request. One point that defenders of the PLOS mandate haven’t stressed enough is that such “soft” mandates are largely meaningless. Empirical studies have repeatedly demonstrated that it’s actually very difficult to get authors to share their data upon request–even when they’re obligated to do so by the contractual agreement they’ve signed with a publisher. And when researchers do fulfill data sharing requests, they often take inordinately long to do so, and the data often don’t line up properly with what was reported in the paper (as the PLOS editors noted in their explanation for introducing the policy), or reveal potentially serious errors.
Personally, I have to confess that I often haven’t fulfilled other researchers’ requests for my data–and in at least two cases, I never even responded to the request. These failures to share didn’t reflect my desire to hide anything; they occurred largely because I knew it would be a lot of work, and/or the data were no longer readily accessible to me, and/or I was too busy to take care of the request right when it came in. I think I’m sufficiently aware of my own character flaws to know that good intentions are no match for time pressure and divided attention–and that’s precisely why I’d rather submit my work to journals that force me to do the tedious curation work up front, when I have a strong incentive to do it, rather than later, when I don’t.
Comprehensive evaluation requires access to the data
It’s hard to escape the feeling that some of the push-back against the policy is actually rooted in the fear that other researchers will find mistakes in one’s work by going through one’s data. In some cases, this fear is made explicit. For example, DrugMonkey suggested that:
There will be efforts to say that the way lab X deals with their, e.g., fear conditioning trials, is not acceptable and they MUST do it the way lab Y does it. Keep in mind that this is never going to be single labs but rather clusters of lab methods traditions. So we’ll have PLoS inserting itself in the role of how experiments are to be conducted and interpreted!
This rather dire premonition prompted a commenter to ask whether it’s possible that DM might ever be wrong about what his data mean–necessitating other pairs of eyes and/or opinions. DM’s response was, in essence, “No.” But clearly, this is wishful thinking: we have plenty of reasons to think that everyone in science–even the luminaries among us–makes mistakes all the time. Science is hard. In the fields I’m most familiar with, I rarely read a paper that I don’t feel has some serious flaws–even though nearly all of these papers were written by people who have, in DM’s words, “been at this for a while”. By the same token, I’m certain that other people read each of my papers and feel exactly the same way. Of course, it’s not pleasant to confront our mistakes by putting everything out into the open, and I don’t doubt that one consequence of sharing data proactively is that error-finding will indeed become much more common. At least initially (i.e., until we develop an appreciation for the true rate of error in the average dataset, and become more tolerant of minor problems), this will probably cause everyone some discomfort. But temporary discomfort surely isn’t a good excuse to continue supporting practices that clearly impede scientific progress.
Part of the problem, I suspect, is that scientists have collectively internalized as acceptable many practices that are on some level clearly not good for the community as a whole. To take just one example, it’s an open secret in biomedical science that so-called “representative figures” (of spiking neurons, Western blots, or whatever else you like) are rarely truly representative. Frequently, they’re actually among the best examples the authors of a paper were able to find. The communal wink-and-nod agreement to ignore this kind of problem is deeply problematic, in that it likely allows many claims that are not actually strongly supported by the data to go unchallenged. In a world where other researchers could easily go through my dataset and show that the “representative” raster plot I presented in Figure 2C was actually the best case rather than the norm, I would probably have to be more careful about making that kind of claim up front–and someone else might not waste a lot of their time chasing results that can’t possibly be as good as my figures make them look.
The data are part of the Methods
If you still don’t find this convincing, consider that one could easily apply nearly all of the arguments people have been making in the blogosphere these past two weeks to that dastardly scientific timesink that is the common Methods section. Imagine that we lived in a culture where scientists always reported their Results telegraphically–that is, with the brevity of a typical Nature or Science paper, but without the accompanying novel’s worth of Supplementary Methods. Then, when someone first suggested that it might perhaps be a good idea to introduce a separate section that describes in dry, technical language how authors actually produced all those exciting results, we would presumably see many people in the community saying something like the following:
Why should I bother to tell you in excruciating detail what software, reagents, and stimuli I used in my study? The vast majority of readers will never try to directly replicate my experiment, and those who do want to can just email me to get the information they need–which of course I’m always happy to provide in a timely and completely disinterested fashion. Asking me to proactively lay out every little methodological step I took is really unreasonable; it would take a very long time to write a clear “Methods” section of the kind you propose, and the benefits seem very dubious. I mean, the only thing that will happen if I adopt this new policy is that half of my competitors will start going through this new section with a fine-toothed comb in order to find problems, and the other half will now be able to scoop me by repeating the exact procedures I used before I have a chance to follow them up myself! And for what? Why do I need to tell everyone exactly what I did? I’m an expert with many years of experience in this field! I know what I’m doing, and I don’t appreciate your casting aspersions on my work and implying that my conclusions might not always be 100% sound!
As far as I can see, there isn’t any qualitative difference between reporting detailed Methods and providing comprehensive Data. In point of fact, many decisions about which methods one should use depend entirely on the nature of the data, so it’s often actually impossible to evaluate the methodological choices the authors made without seeing their data. If DrugMonkey et al. think it’s crazy for one researcher to want access to another researcher’s data in order to determine whether the distribution of some variable looks normal, they should also think it’s crazy for researchers to have to report their reasoning for choosing a particular transformation in the first place. Or for using a particular reagent. Or animal strain. Or learning algorithm, or… you get the idea. But as Bjorn Brembs succinctly put it, in the digital age, this is silly: for all intents and purposes, there’s no longer any difference between text and data.
The data are funded by the taxpayers, and (in some sense) belong to the taxpayers
People vary widely in the extent to which they feel the public deserves to have access to the products of the work it funds. I don’t think I hold a particularly extreme position in this regard, in the sense that I don’t think the mere fact that someone’s effort is funded by the public automatically means any of their products should be publicly available for anyone’s perusal or use. However, when we’re talking about scientific data–where the explicit rationale for funding the work is to produce new generalizable knowledge, and where the marginal cost of replicating digital data is close to zero–I really don’t see any reason not to push very strongly to force scientists to share their data. I’m sympathetic to claims about scooping and credit assignment, but as a number of other folks have pointed out in comment threads, these are fundamentally arguments in favor of better credit assignment, and not arguments against sharing data. The fear some people have of being scooped is not sufficient justification for impeding our collective scientific progress.
It’s also worth noting that, in principle, PLOS’s new data sharing policy shouldn’t actually make it any easier for someone else to scoop you. Remember that under PLOS’s previous data sharing mandate–as well as the equivalent policies still in force at most other scientific journals–authors were already required to provide their data to anyone else upon request. Critics who argue that the new public archiving mandate opens the door to being scooped are in effect admitting that the old mandate to share upon request doesn’t work, because in theory there already shouldn’t be anything preventing me from scooping you with your data simply by asking you for it (other than social norms–but then, the people who are actively out to usurp others’ ideas are the least likely to abide by those norms anyway). It’s striking how many of the posts defending the “share-upon-request” approach have no compunction about admitting that their authors are currently only willing to share their data after determining what the person on the other end wants to use it for–in clear violation of most journals’ existing policies.
It’s really not that hard
Organizing one’s data or code in a form minimally suitable for public consumption isn’t much fun. I do it fairly regularly; I know it sucks. It takes some time out of your day, and requires you to allocate resources to the problem that could otherwise be directed elsewhere. That said, a lot of the posts complaining about how much effort the new policy requires seem absurdly overwrought. There seems to be a widespread belief–which, as far as I can tell, isn’t supported by a careful reading of the actual PLOS policy–that there’s some incredibly strict standard that datasets have to live up to before public release. I don’t really understand where this concern comes from. Personally, I spend much of my time analyzing data other people have collected, and rarely are those data in exactly the form I would like. Oftentimes they’re not even in the ballpark of what I’d like. I’ve had to invest a considerable amount of my time understanding what columns and rows mean, and scrounging for morsels of (poor) documentation. My working assumption when I do this–and, I think, most other people’s–is that the onus is on me to expend some effort figuring out what’s in a dataset I wish to use, and not on the author to release that dataset in a form that a completely naive person could understand without any effort. Of course it would be nice if everyone put their data up on the web in a form that maximized accessibility, but it certainly isn’t expected*. In asking authors to deposit their data publicly, PLOS isn’t asserting that there’s a specific format or standard that all data must meet; they’re just saying the data must meet accepted norms. Since those norms depend on one’s field, it stands to reason that expectations will be lower for a 10-TB fMRI dataset than for an 800-row spreadsheet of behavioral data.
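To make that concrete, here is a minimal sketch of the kind of up-front curation I have in mind for a small behavioral dataset: a plain CSV plus a few lines of human-readable documentation. Everything below (file names, column names, values) is hypothetical; it illustrates one way of meeting “accepted norms”, not a format PLOS prescribes.

```python
# Hypothetical example: write out a small behavioral dataset together
# with a plain-text data dictionary. Nothing here is required by any
# journal policy; it's just the kind of minimal documentation that
# saves a stranger hours of guessing what the rows and columns mean.
import csv

rows = [
    {"subject_id": "s01", "condition": "control",   "rt_ms": 512, "accuracy": 0.94},
    {"subject_id": "s02", "condition": "treatment", "rt_ms": 488, "accuracy": 0.97},
]

with open("behavioral_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)

# A few lines describing each column go a long way.
with open("README.txt", "w") as f:
    f.write(
        "behavioral_data.csv: one row per subject.\n"
        "  subject_id: anonymized subject code\n"
        "  condition:  'control' or 'treatment'\n"
        "  rt_ms:      median reaction time, in milliseconds\n"
        "  accuracy:   proportion of correct trials (0-1)\n"
    )
```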
There are some valid concerns, but…
I don’t want to sound too Pollyannaish about all this. I’m not suggesting that the PLOS policy is perfect, or that issues won’t arise in the course of its implementation and enforcement. It’s very clear that there are some domains in which data sharing is a hassle, and I sympathize with the people who’ve pointed out that it’s not really clear what “all” the data means–is it the raw data, which aren’t likely to be very useful to anyone, or the post-processed data, which may be too close to the results reported in the paper? But such domain- or case-specific concerns are grossly outweighed by the very general observation that it’s often impossible to evaluate previous findings adequately, or to build a truly replicable science, if you don’t have access to other scientists’ data. There’s no doubt that edge cases will arise in the course of enforcing the new policy. But they’ll be dealt with on a case-by-case basis, exactly as the PLOS policy indicates. In the meantime, our default assumption should be that editors at PLOS–who are, after all, also working scientists–will behave reasonably, since they face many of the same considerations in their own research. When a researcher tells an editor that she doesn’t have anywhere to put the 50 TB of raw data for her imaging study, I expect that that editor will typically respond by saying, “fine, but surely you can drag and drop a directory full of the first- and second-level beta images, along with a basic description, into NeuroVault, right?”, and not “Whut!? No raw DICOM images, no publication!”
As for the people who worry that by sharing their data, they’ll be giving away a competitive advantage… to be honest, I think many of these folks are mistaken about the dire consequences that would ensue if they shared their data publicly. I suspect that many of the researchers in question would be pleasantly surprised at the benefits of data sharing (increased citation rates, new offers of collaboration, etc.). Still, it’s clear enough that some of the people who’ve done very well for themselves in the current scientific system–typically by leveraging some incredibly difficult-to-acquire dataset into a cottage industry of derivative studies–would indeed do much less well in a world where open data sharing was mandatory. What I fail to see, though, is why PLOS, or the scientific community as a whole, should care very much about this latter group’s concerns. As far as I can tell, PLOS’s new policy is a significant net positive for the scientific community as a whole, even if it hurts one segment of that community in the short term. For the moment, scientists who harbor proprietary attitudes towards their data can vote with their feet by submitting their papers somewhere other than PLOS. Contrary to the dire premonitions floating around, I very much doubt any potential drop in submissions is going to deliver a terminal blow to PLOS (and the upside is that the articles that do get published in PLOS will arguably be of higher quality). In the medium-to-long term, I suspect that cultural norms surrounding who gets credit for acquiring and sharing data vs. analyzing and reporting new findings based on those data are going to undergo a sea change–to the point where, in the not-too-distant future, the scoopophobia that currently drives many people to privately hoard their data is a complete non-factor. At that point, it’ll be seen as just plain common sense that if you want your scientific assertions to be taken seriously, you need to make the data used to support those assertions available for public scrutiny, re-analysis, and re-use.
* As a case in point, just yesterday I came across a publicly accessible dataset I really wanted to use, but that was in SPSS format. I don’t own a copy of SPSS, so I spent about an hour trying to get various third-party libraries to extract the data appropriately, without any luck. So eventually I sent the file to a colleague who was helpful enough to convert it. My first thought when I received the tab-delimited file in my mailbox this morning was not “ugh, I can’t believe they released the file in SPSS”, it was “how amazing is it that I can download this gigantic dataset acquired half the world away instantly, and with just one minor hiccup, be able to test a novel hypothesis in a high-powered way without needing to spend months of time collecting data?”
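For the curious, here is roughly what that kind of conversion looks like when it does work. This is a hedged sketch assuming the third-party pyreadstat library (pip install pyreadstat) can actually parse the file in question, which is by no means guaranteed; the filename is hypothetical.

```python
# Hypothetical conversion of a shared SPSS file to tab-delimited text.
# Assumes pyreadstat can read this particular .sav file; in practice,
# as noted above, your mileage may vary.
import pyreadstat

# read_sav returns a pandas DataFrame plus a metadata object.
df, meta = pyreadstat.read_sav("shared_dataset.sav")

# Dump to a plain tab-delimited file that any tool can open.
df.to_csv("shared_dataset.tsv", sep="\t", index=False)

# The metadata preserves SPSS variable labels, which are often the only
# documentation a shared dataset carries.
for name, label in meta.column_names_to_labels.items():
    print(f"{name}: {label}")
```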
I wrote one of the so-called anti-PLOS posts. It was addressing what you recognize to be valid concerns, in particular how the positives and negatives of required archiving are asymmetrically distributed among scientists, in a way that might exacerbate existing inequities.
I hope you’re right, that *time* will show that I’m on the wrong side of the issue. The circumstances should change over time. At the moment, given the existing rewards structure in science, I’m on the right side, but in time, I hope that won’t be the case.
Terry, I’m not in a position to evaluate the incentives as they apply specifically to you and your lab, but my point was simply that from PLOS’s point of view, it’s largely irrelevant if there’s a small subset of researchers who won’t submit to PLOS any more as a result of the new policy. The new policy is clearly in the long-term best interest of the community as a whole, and as long as there isn’t a very large short-term drop-off in submissions (which could happen, but seems doubtful), I don’t see why PLOS would have any reason to reverse course.
“The data are funded by the taxpayers, and (in some sense) belong to the taxpayers.”
Not sure what the “in some sense” qualifier is intended to convey here. In any case, who owns my research data is clear, and it is NOT taxpayers:
http://neurodojo.blogspot.com/2014/03/who-owns-data.html
The “in some sense” basically means “it’s complicated”. Like I said, I don’t have a very extreme position on this; I recognize that there are many stakeholders with some kind of claim to some kind of ownership of the data. What’s at issue here isn’t ownership per se, though, it’s access. The fact that universities (claim to) own the IP over data doesn’t mean you as the researcher don’t have the right to share it, that the taxpayers don’t have the right to access it, or that a journal can’t make public deposition a condition of publication. But I’m happy to concede that this is the weakest argument of the ones I discussed–largely because I view it as a moral imperative rather than a contractual obligation.
Dear Zen,
Quick comment on this issue, which I believe in my particular field (human neuroimaging) and country (Canada) is indeed crystal clear. I do research with human data. The data are owned by the participants. Not the University, not the funding agencies, not the researcher, but the people who dedicate their time and sometimes their well-being (e.g., in drug trials with unclear costs and benefits) to participate in a study. At least that’s what I was told as part of my mandatory training in ethics, as a health researcher. Research in human clinical neuroscience aims at improving public health, and implementing policies that seriously threaten this goal, such as prohibiting or heavily restricting data sharing, is as far as I can tell plainly unethical.
Now, the situation may be different with, say, data collected in flies. But I note that with software, where my University definitely owns part of the IP, many researchers, including myself, release their code publicly under very liberal open source licenses. All in all, I believe the resistance to data sharing has no clear foundation beyond researchers’ unwillingness to embrace more open practices. I trace it to simple resistance to change, lack of incentives (more work required, no obvious reward), and some desire to maintain a competitive edge, which I think is more fantasy than reality in the vast majority of cases. Cheers,
Pierre
Thanks for writing all this out; I agree with everything you’ve said. One additional strong argument against ‘share upon request’ is that it only works for the first year or three after publication–after that, there’s a high probability that you can’t get hold of the author, or that the dataset is lost (http://www.nature.com/news/scientists-losing-data-at-a-rapid-rate-1.14416)
Nice post Tal. I’m a bit confused at the scooping concern with respect to posting data. It seems that, once a paper is accepted for publication, and the authors have received credit by way of the published product, they can no longer be “scooped” with respect to the effects published in the paper. For example, if I show that X causes Y through mediator Z, and then post an SPSS file with X, Y, and Z, no one can come along and again show that X causes Y through mediator Z (even if they’re doing a direct replication, they would need new data, not my data).
What, then, does scooping based on posted data look like? Is the concern that someone will come along and test novel hypotheses with the variables that I used to test my original (now published) hypothesis (e.g., in the scenario above, maybe Z causes both X and Y)? If so, there seem to be ways that authors can protect themselves from being scooped. For example, they could describe future directions in their research program in the General Discussion (GD) of the original paper, thereby rendering any scooping attempts suspiciously close to plagiarism. In the above scenario, if I wrote in the GD that I am currently conducting a study to test whether Z causes both X and Y, someone who tried to publish this effect with my data would look bad. Yes, this would make it incumbent upon authors to know what their future directions would be, but asking authors to think hard about their research doesn’t seem like a bad idea.
I’d be interested to hear your thoughts on what form post-publication scooping would look like.
Thanks, Aaron. I agree the scooping concern is kind of overblown. I do think there are legitimate cases where putting one’s data online might allow others to publish novel findings that you were planning on getting around to yourself, but as I said, I think this is a pretty small proportion of cases, and in any case I don’t see why the scientific community as a whole should care. My personal attitude regarding scooping is that if you’re a creative scientist with good ideas, you’re very unlikely to run out of interesting hypotheses to test and good experiments to run. I’m not going to pretend it doesn’t annoy me a little bit when someone uses data I’ve obtained (e.g., Neurosynth) to do something clever that I was thinking about doing myself, but after a short bout of envy, I usually find it easy to remind myself that the whole point of sharing data is precisely so that people can use it to do cool stuff–and that I have plenty of other things to work on anyway.
This policy is a bit of a mixed bag for me. In our lab, many of the data generated are paper electroantennogram traces, which we use for interpretation and guidance. The only time they become digitized is in preparation of figures for manuscripts (which is time-consuming and frustrating!)
We have pushed for an all-digital workflow, which in the long run would save everyone a hell of a lot of time, but as students we have a hard time convincing our professor. As it stands, the majority of the dataset from my last PLOS paper is sitting in a drawer oxidizing on the cheap paper it was made on. I believe there are over 80 traces and just as many MS traces/datafiles. I would like to make it all available, but the work to do so would take forever! If I ever get a data request, I would probably just snail-mail off photocopies and DVDs. On the plus side, the majority of the behavioral data (which were most important in arriving at a conclusion) are submitted as supplemental videos on FigShare.
I can definitely see the value of sharing the data, and the impetus to do so will make everyone’s lives easier in the long run. I would definitely implement strong data curation and mandatory sharing if I ran my own lab, but as it stands, I don’t think that is ever likely to happen!
Thanks for the comment, Sean. I guess what I’d say to that is that once enough journals/publishers implement a PLOS-like policy on data, your professor will probably have no choice but to move to an all-digital workflow–which would be good for everyone in the long run. In the short term, of course, you can just submit to another journal.
It would be worth it just for the time savings in figure preparation alone!