I hate open science

Now that I’ve got your attention: what I hate—and maybe dislike is a better term than hate—isn’t the open science community, or open science initiatives, or open science practices, or open scientists… it’s the term. I fundamentally dislike the term open science. For the last few years, I’ve deliberately tried to avoid using it. I don’t call myself an open scientist, I don’t advocate publicly for open science (per se), and when people use the term around me, I often make a point of asking them to clarify what they mean.

This isn’t just a personal idiosyncrasy of mine in a chalk-on-chalkboard sense; I think at this point in time there are good reasons to think the continued use of the term is counterproductive, and we should try to avoid it in most contexts. Let me explain.

It’s ambiguous

At SIPS 2019 last week (SIPS is the Society for the Improvement of Psychological Science), I had a brief chat with a British post-undergrad student who was interested in applying to graduate programs in the United States. He asked me what kind of open science community there was at my home institution (the University of Texas at Austin). When I started to reply, I realized that I actually had no idea what question the student was asking me, because I didn’t know his background well enough to provide the appropriate context. What exactly did he mean by “open science”? The term is now used so widely, and in so many different ways, that the student could plausibly have been asking me about any of the following things, either alone or in combination:

  • Reproducibility. Do people [at UT-Austin] value the ability to reproduce, computationally and/or experimentally, the scientific methods used to produce a given result? More concretely, do they conduct their analyses programmatically, rather than using GUIs? Do they practice formal version control? Are there opportunities to learn these kinds of computational skills?
  • Accessibility. Do people believe in making their scientific data, materials, results, papers, etc. publicly, freely, and easily available? Do they work hard to ensure that other scientists, funders, and the taxpaying public can easily get access to what scientists produce?
  • Incentive alignment. Are there people actively working to align individual incentives and communal incentives, so that what benefits an individual scientist also benefits the community at large? Do they pursue local policies meant to promote some of the other practices one might call part of “open science”?
  • Openness of opinion. Do people feel comfortable openly critiquing one another? Is there a culture of discussing (possibly trenchant) problems openly, without defensiveness? Do people take discussion on social media and post-publication review forums seriously?
  • Diversity. Do people value and encourage the participation in science of people from a wide variety of ethnicities, genders, skills, personalities, socioeconomic strata, etc.? Do they make efforts to welcome others into science, invest effort and resources to help them succeed, and accommodate their needs?
  • Metascience and informatics. Are people thinking about the nature of science itself, and reflecting on what it takes to promote a healthy and productive scientific enterprise? Are they developing systematic tools or procedures for better understanding the scientific process, or the work in specific scientific domains?

This is not meant to be a comprehensive list; I have no doubt there are other items one could add (e.g., transparency, collaborativeness, etc.). The point is that open science is, at this point, a very big tent. It contains people who harbor a lot of different values and engage in many different activities. While some of these values and activities may tend to co-occur within people who call themselves open scientists, many don’t. There is, for instance, no particular reason why someone interested in popularizing reproducible science methods should also be very interested in promoting diversity in science. I’m not saying there aren’t people who want to do both (of course there are); empirically, there might even be a modest positive correlation—I don’t know. But they clearly don’t have to go together, and plenty of people are far more invested in one than in the other.

Further, as in any other enterprise, if you monomaniacally push a single value hard enough, then at a certain point, tensions will arise even between values that would ordinarily co-exist peacefully if each given only partial priority. For example, if you think that doing reproducible science well requires a non-negotiable commitment to doing all your analyses programmatically, and maintaining all your code under public version control, then you’re implicitly condoning a certain reduction in diversity within science, because you insist on having only people with a certain set of skills take part in science, and people from some backgrounds are more likely than others (at least at present) to have those skills. Conversely, if diversity in science is the thing you value most, then you need to accept that you’re effectively downgrading the importance of many of the other values listed above in the research process, because any skill or ability you might use to select or promote people in science is necessarily going to reduce (in expectation) the role of other dimensions in the selection process.

This would be a fairly banal and inconsequential observation if we lived in a world where everyone who claimed membership in the open science community shared more or less the same values. But we clearly don’t. In highlighting the ambiguity of the term open science, I’m not just saying hey, just so you know, there are a lot of different activities people call open science; I’m saying that, at this point in time, there are a few fairly distinct sub-communities of people that all identify closely with the term open science and use it prominently to describe themselves or their work, but that actually have fairly different value systems and priorities.

Basically, we’re now at the point where, when someone says they’re an open scientist, it’s hard to know what they actually mean.

It wasn’t always this way; I think ten or even five years ago, if you described yourself as an open scientist, people would have identified you primarily with the movement to open up access to scientific resources and promote greater transparency in the research process. This is still roughly the first thing you find on the Wikipedia entry for Open Science:

Open science is the movement to make scientific research (including publications, data, physical samples, and software) and its dissemination accessible to all levels of an inquiring society, amateur or professional. Open science is transparent and accessible knowledge that is shared and developed through collaborative networks. It encompasses practices such as publishing open research, campaigning for open access, encouraging scientists to practice open notebook science, and generally making it easier to publish and communicate scientific knowledge.

That was a fine definition once upon a time, and it still works well for one part of the open science community. But as a general, context-free definition, I don’t think it flies any more. Open science is now much broader than the above suggests.

It’s bad politics

You might say, okay, but so what if open science is an ambiguous term; why can’t that be resolved by just having people ask for clarification? Well, obviously, to some degree it can. My response to the SIPS student was basically a long and winding one that involved a lot of conditioning on different definitions. That’s inefficient, but hopefully the student still got the information he wanted out of it, and I can live with a bit of inefficiency.

The bigger problem though, is that at this point in time, open science isn’t just a descriptive label for a set of activities scientists often engage in; for many people, it’s become an identity. And, whatever you think the value of open science is as an extensional label for a fairly heterogeneous set of activities, I think it makes for terrible identity politics.

There are two reasons for this. First, turning open science from a descriptive label into a full-blown identity risks turning off a lot of scientists who are either already engaged in what one might otherwise call “best practices”, or who are very receptive to learning such practices, but are more interested in getting their science done than in discussing the abstract merits of those practices or promoting their use to others. If you walk into a room and say, in the next three hours, I’m going to teach you version control, and there’s a good chance this could really help your research, probably quite a few people will be interested. If, on the other hand, you walk into the room and say, let me tell you how open science is going to revolutionize your research, and then proceed to either mention things that a sophisticated audience already knows, or blitz a naive audience with 20 different practices that you describe as all being part of open science, the reception is probably going to be frostier.

If your goal is to get people to implement good practices in their research—and I think that’s an excellent goal!—then it’s not so clear that much is gained by talking about open science as a movement, philosophy, culture, or even community (though I do think there are some advantages to the latter). It may be more effective to figure out who your audience is, what some of the low-hanging fruit are, and focus on those. Implying that there’s an all-or-none commitment—i.e., one is either an open scientist or not, and to be one, you have to buy into a whole bunch of practices and commitments—is often counterproductive.

The second problem with treating open science as a movement or identity is that the diversity of definitions and values I mentioned above almost inevitably leads to serious rifts within the broad open science community—i.e., between groups of people who would have little or no beef with one another if not for the mere fact that they all happen to identify as open scientists. If you spend any amount of time on social media following people whose biography includes the phrases “open science” or “open scientist”, you’ll probably know what I’m talking about. At a rough estimate, I’d guess that these days maybe 10 – 20% of tweets I see in my feed containing the words “open science” are part of some ongoing argument between people about what open science is, or who is and isn’t an open scientist, or what’s wrong with open science or open scientists—and not with substantive practices or applications at all.

I think it’s fair to say that most (though not all) of these arguments are, at root, about deep-seated differences in the kinds of values I mentioned earlier. People care about different things. Some people care deeply about making sure that studies can be accurately reproduced, and only secondarily or tertiarily about the diversity of the people producing those studies. Other people have the opposite priorities. Both groups of people (and there are of course many others) tend to think their particular value system properly captures what open science is (or should be) all about, and that the movement or community is being perverted or destroyed by some other group of people who, while perhaps well-intentioned (and sometimes even this modicum of charity is hard to find), just don’t have their heads screwed on quite straight.

This is not a new or special thing. Any time a large group of people with diverse values and interests find themselves all forced to sit under a single tent for a long period of time, divisions—and consequently, animosity—will eventually arise. If you’re forced to share limited resources or audience attention with a group of people who claim they fill the same role in society that you do, but who you disagree with on some important issues, odds are you’re going to experience conflict at some point.

Now, in some domains, these kinds of conflicts are truly unavoidable: the factors that introduce intra-group competition for resources, prestige, or attention are structural, and resolving them without ruining things for everyone is very difficult. In politics, for example, one’s nominal affiliation with a political party is legitimately kind of a big deal. In the United States, if a splinter group of disgruntled Republican politicians were to leave their party and start a “New Republican” party, they might achieve greater ideological purity and improve their internal social relations, but the new party’s members would also lose nearly all of their influence and power pretty much overnight. The same is, of course, true for disgruntled Democrats. The Nash equilibrium is, presently, for everyone to stay stuck in the same dysfunctional two-party system.

Open science, by contrast, doesn’t really have this problem. Or at least, it doesn’t have to have this problem. There’s an easy way out of the acrimony: people can just decide to deprecate vague, unhelpful terms like “open science” in favor of more informative and less controversial ones. I don’t think anything terrible is going to happen if someone who previously described themselves as an “open scientist” starts avoiding that term and instead opts to self-describe using more specific language. As I noted above, I speak from personal experience here (if you’re the kind of person who’s more swayed by personal anecdotes than by my ironclad, impregnable arguments). Five years ago, my talks and papers were liberally sprinkled with the term “open science”. For the last two or three years, I’ve largely avoided the term—and when I do use it, it’s often to make the same point I’m making here.

For the most part, I think I’ve succeeded in eliminating open science from my discourse in favor of more specific terms like reproducibility, transparency, diversity, etc. Which term I use depends on the context. I haven’t, so far, found myself missing the term “open”, and I don’t think I’ve lost brownie points in any club for not using it more often. I do, on the other hand, feel very confident that (a) I’ve managed to waste fewer people’s time by having to follow up vague initial statements about “open” things with more detailed clarifications, and (b) I get sucked into way fewer pointless Twitter arguments about what open science is really about (though admittedly the number is still not quite zero).

The prescription

So here’s my simple prescription for people who either identify as open scientists, or use the term on a regular basis: Every time you want to use the term open science—in your biography, talk abstracts, papers, tweets, conversation, or whatever else—pause and ask yourself if there’s another term you could substitute that would decrease ambiguity and avoid triggering never-ending terminological arguments. I’m not saying that the answer will always be yes. If you’re confident that the people you’re talking to have the same definition of open science as you, or you really do believe that nobody should ever call themselves an open scientist unless they use git, then godspeed—open science away. But I suspect that for most uses, there won’t be any such problem. In most instances, “open science” can be seamlessly replaced with something like “reproducibility”, “transparency”, “data sharing”, “being welcoming”, and so on. It’s a low-effort move, and the main effect of making the switch is that other people will have a clearer understanding of what you mean, and may be less inclined to argue with you about it.

Postscript

Some folks on twitter were concerned that this post makes it sound as if I’m passing off prior work and ideas as my own (particularly as relates to the role of diversity in open science). So let me explicitly state here that I don’t think any of the ideas expressed in this post are original to me in any way. I’ve heard most (if not all) expressed many times by many people in many contexts, and this post just represents my effort to distill them into a clear summary of my views.

estimating the influence of a tweet–now with 33% more causal inference!

Twitter is kind of a big deal. Not just out there in the world at large, but also in the research community, which loves the kind of structured metadata you can retrieve for every tweet. A lot of researchers rely heavily on twitter to model social networks, information propagation, persuasion, and all kinds of interesting things. For example, here’s the abstract of a nice recent paper on arXiv that aims to predict successful memes using network and community structure:

We investigate the predictability of successful memes using their early spreading patterns in the underlying social networks. We propose and analyze a comprehensive set of features and develop an accurate model to predict future popularity of a meme given its early spreading patterns. Our paper provides the first comprehensive comparison of existing predictive frameworks. We categorize our features into three groups: influence of early adopters, community concentration, and characteristics of adoption time series. We find that features based on community structure are the most powerful predictors of future success. We also find that early popularity of a meme is not a good predictor of its future popularity, contrary to common belief. Our methods outperform other approaches, particularly in the task of detecting very popular or unpopular memes.

One limitation of much of this body of research is that the data are almost invariably observational. We can build sophisticated models that do a good job predicting some future outcome (like meme success), but we don’t necessarily know that the “important” features we identify carry any causal influence. In principle, they could be completely epiphenomenal–for example, in the study I linked to, maybe the community structure features are just a proxy for some other, causally important, factor (e.g., whether the content of a meme has sufficiently broad appeal to attract attention from many different kinds of people). From a predictive standpoint, this may not matter much; if your goal is just to passively predict whether a meme is going to be successful or not, it’s irrelevant whether or not the features you’re using are doing causal work. On the other hand, if you want to actively design memes in such a way as to maximize their spread, the ability to get a handle on causation starts to look pretty important.

How can we estimate the direct causal influence of a tweet on the downstream popularity of a meme? Here’s a simple and (I suspect) very feasible idea in two steps:

  1. Create a small web app that allows any existing Twitter user to register via Twitter authentication. On signing up, a user has to specify just one (optional) setting: the proportion of their intended retweets they’re willing to withhold. Let’s call this the Withholding Fraction (WF).
  2. Every time (or at least some of the time) a registered user wants to retweet a particular tweet*, they do so via the new web app’s interface (which has permission to post to the user’s Twitter account) instead of whatever interface they’re currently using. The key is that the retweet isn’t just obediently passed along; instead, the target tweet is retweeted successfully with probability (1 – WF), and randomly suppressed from the user’s stream with probability (WF).

Doing this would allow the community to very quickly (assuming rapid adoption, which seems reasonably likely) build up an enormous database of tweets that were targeted for retweeting by an active user, but randomly assigned to fail with some known probability. Researchers would then be able to directly quantify the causal impact of individual retweets on downstream popularity–and to estimate that influence conditional on all of the other standard variables, like the retweeter’s number of followers, the content of the tweet, etc. Of course, this still wouldn’t get us to true experimental manipulation of such features (i.e., we wouldn’t be manipulating users’ follower networks, just randomly omitting tweets from users with different followers), but it seems like a step in the right direction**.
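The two steps above boil down to a per-retweet Bernoulli randomization, and the data it generates support a very simple estimator: because delivery is random, a plain difference in mean downstream popularity between delivered and withheld retweets recovers the causal effect. Here’s a minimal Python sketch of that logic; the function names (`maybe_retweet`, `causal_effect`) and the simulated popularity model are my own illustrations, not any real app or Twitter API:

```python
import random
from statistics import mean

def maybe_retweet(withholding_fraction, rng=random.random):
    """Decide whether a requested retweet actually gets posted.

    Returns True (retweet goes through) with probability 1 - WF,
    False (silently withheld) with probability WF."""
    return rng() >= withholding_fraction

def causal_effect(records):
    """Estimate the average causal effect of a single retweet on
    downstream popularity, exploiting the randomization directly.

    `records` is a list of (delivered, downstream_count) pairs;
    the estimate is just the difference in group means."""
    delivered = [n for went_out, n in records if went_out]
    withheld = [n for went_out, n in records if not went_out]
    return mean(delivered) - mean(withheld)

# Simulate the scheme with a known ground truth to check it works.
rng = random.Random(42)
WF = 0.3           # user agreed to withhold 30% of intended retweets
TRUE_EFFECT = 5    # assume each delivered retweet adds ~5 downstream retweets

records = []
for _ in range(10_000):
    delivered = maybe_retweet(WF, rng.random)
    baseline = rng.randint(0, 20)  # popularity the meme accrues anyway
    downstream = baseline + (TRUE_EFFECT if delivered else 0)
    records.append((delivered, downstream))

print(round(causal_effect(records), 1))  # close to TRUE_EFFECT at this sample size
```

With heterogeneous per-user WFs you’d want to condition on (or inverse-probability-weight by) each user’s delivery probability rather than pool naively, but the core estimator is the same.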

I figure building a barebones app like this would take an experienced developer familiar with the Twitter OAuth API just a day or two. And I suspect many people (myself included!) would be happy to contribute to this kind of experiment, provided that all of the resulting data were made public. (I’m aware that there are all kinds of restrictions on sharing assembled Twitter datasets, but we’re not talking about sharing firehose dumps here, just a restricted set of retweets from users who’ve explicitly given their consent to have the data used in this way.)

Has this kind of thing already been done? If not, does anyone want to build it?

 

* It doesn’t just have to be retweets, of course; the same principle would work just as well for withholding a random fraction of original tweets. But I suspect not many users would be willing to randomly eliminate a proportion of their original content from the firehose.

** If we really wanted to get close to true random assignment, we could potentially inject selected tweets into random users’ streams based on selected criteria. But I’m not sure how many tweeps would consent to have entirely random retweets published in their name (I probably wouldn’t), so this probably isn’t viable.

tuesday at 3 pm works for me

Apparently, Tuesday at 3 pm is the best time to suggest as a meeting time–that’s when people have the most flexibility available in their schedule. At least, that’s the conclusion drawn by a study based on data from WhenIsGood, a free service that helps with meeting scheduling. There’s not much to the study beyond the conclusion I just gave away; not surprisingly, people don’t like to meet before 10 or 11 am or after 4 pm, and there’s very little difference in availability across different days of the week.

What I find neat about this isn’t so much the results of the study itself as the fact that it was done at all. I’m a big proponent of using commercial website data for research purposes–I’m about to submit a paper that relies almost entirely on content pulled using the Blogger API, and am working on another project that makes extensive use of the Twitter API. The scope of the datasets one can assemble via these APIs is simply unparalleled; for example, there’s no way I could ever realistically collect writing samples of 50,000+ words from 500+ participants in a laboratory setting, yet the ability to programmatically access blogspot.com blog contents makes the task trivial. And of course, many websites collect data of a kind that just isn’t available off-line. For example, the folks at OKCupid are able to continuously pump out interesting data on people’s online dating habits because they have comprehensive data on interactions between literally millions of prospective dating partners. If you want to try to generate that sort of data off-line, I hope you have a really large lab.

Of course, I recognize that in this case, the WhenIsGood study really just amounts to a glorified press release. You can tell that’s what it is from the URL, which literally includes the “press/” directory in its path. So I’m certainly not naive enough to think that Web 2.0 companies are publishing interesting research based on their proprietary data solely out of the goodness of their hearts. Quite the opposite. But I think in this case the desire for publicity works in researchers’ favor: It’s precisely because virtually any press is considered good press that many of these websites would probably be happy to let researchers play with their massive (de-identified) datasets. It’s just that, so far, hardly anyone’s asked. The Web 2.0 world is a largely untapped resource that researchers (or at least, psychologists) are only just beginning to take advantage of.

I suspect that this will change in the relatively near future. Five or ten years from now, I imagine that a relatively large chunk of the research conducted in many areas of psychology (particularly social and personality psychology) will rely heavily on massive datasets derived from commercial websites. And then we’ll all wonder in amazement at how we ever put up with the tediousness of collecting real-world data from two or three hundred college students at a time, when all of this online data was just lying around waiting for someone to come take a peek at it.