A long, long time ago (in social media terms), I wrote a post defending Facebook against accusations of ethical misconduct related to a newly-published study in PNAS. I won’t rehash the study, or the accusations, or my comments in any detail here; for that, you can read the original post (I also recommend reading this or this for added context). While I stand by most of what I wrote, as is the nature of things, sometimes new information comes to light, and sometimes people say things that make me change my mind. So I thought I’d post my updated thoughts and reactions. I also left some additional thoughts in a comment on my last post, which I won’t rehash here.
Anyway, in no particular order…
I’m not arguing for a lawless world where companies can do as they like with your data
Some people apparently interpreted my last post as a defense of Facebook’s data use policy in general. It wasn’t. I probably brought this on myself in part by titling the post “In Defense of Facebook”. Maybe I should have called it something like “In Defense of this one particular study done by one Facebook employee”. In any case, I’ll reiterate: I’m categorically not saying that Facebook–or any other company, for that matter–should be allowed to do whatever it likes with its users’ data. There are plenty of valid concerns one could raise about the way companies like Facebook store, manage, and use their users’ data. And for what it’s worth, I’m generally in favor of passing new rules regulating the use of personal data in the private sector. So, contrary to what some posts suggested, I was categorically not advocating for a laissez-faire world in which large corporations get to do as they please with your information, and there’s nothing us little people can do about it.
The point I made in my last post was much narrower than that–namely, that picking on the PNAS study as an example of ethically questionable practices at Facebook was a bad idea, because (a) any new risks introduced by this manipulation are dwarfed by the risks associated with using Facebook itself (which is not exactly a high-risk enterprise to begin with), and (b) there are literally thousands of experiments just like this being conducted every day by large companies intent on figuring out how best to market their products and services–so Facebook’s study doesn’t stand out in any respect. My point was not that you shouldn’t be concerned about who has your data and how they’re using it, but that it’s deeply counterproductive to go after Facebook for this particular experiment when Facebook is one of the few companies in this arena that actually (occasionally) publish the results of their findings in the scientific literature, instead of hiding them entirely from the light, as almost everyone else does. Of course, that will probably change as a result of this controversy.
I was wrong–A/B testing edition
One claim I made in my last post that was very clearly wrong is this (emphasis added):
What makes the backlash on this issue particularly strange is that I’m pretty sure most people do actually realize that their experience on Facebook (and on other websites, and on TV, and in restaurants, and in museums, and pretty much everywhere else) is constantly being manipulated. I expect that most of the people who’ve been complaining about the Facebook study on Twitter are perfectly well aware that Facebook constantly alters its user experience–I mean, they even see it happen in a noticeable way once in a while, whenever Facebook introduces a new interface.
After watching the commentary over the past two days, I think it’s pretty clear I was wrong about this. A surprisingly large number of people clearly were genuinely unaware that Facebook, Twitter, Google, and other major players in every major industry (not just tech–also banks, groceries, department stores, you name it) are constantly running large-scale, controlled experiments on their users and customers. For instance, here’s a telling comment left on my last post:
The main issue I have with the experiment is that they conducted it without telling us. Given, that would have been counterproductive, but even a small adverse affect is still an adverse affect. I just don’t like the idea that corporations can do stuff to me without my consent. Just my opinion.
Similar sentiments are all over the place. Clearly, the revelation that Facebook regularly experiments on its users without their knowledge was indeed just that to many people–a revelation. I suppose in this sense, there’s potentially a considerable upside to this controversy, inasmuch as it has clearly served to raise awareness of industry-standard practices.
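For readers encountering this practice for the first time, here’s a minimal sketch of what a routine A/B test might look like under the hood. Everything in it is hypothetical–the function names, the experiment label, the engagement numbers–but the basic logic is real enough: users are silently and deterministically assigned to conditions, and some outcome metric is then compared across the arms.

    import hashlib
    from statistics import mean

    def assign_condition(user_id, experiment="feed_ranking_v2"):
        # Deterministic hash-based bucketing: the same user always lands in the
        # same arm, with no notification and no opt-in required.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return "treatment" if int(digest, 16) % 100 < 50 else "control"

    # Hypothetical per-user engagement scores logged while the test runs
    logged = {"u1": 0.42, "u2": 0.55, "u3": 0.38, "u4": 0.61}

    arms = {"control": [], "treatment": []}
    for user_id, engagement in logged.items():
        arms[assign_condition(user_id)].append(engagement)

    for arm, scores in arms.items():
        print(arm, round(mean(scores), 3) if scores else "no users")

Multiply this by the thousands of experiments running concurrently at any large web company, and you have a reasonable picture of the practice people were reacting to.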
Questions about the ethics of the PNAS paper’s publication
My post focused largely on the question of whether the experiment Facebook conducted was itself illegal or unethical. I took this to be the primary issue for most lay people who have expressed concern about the episode. As I discussed in my post, I think it’s quite clear (a) that the experiment itself was entirely legal, and (b) that any ethical objections one could raise are actually much broader objections about the way we regulate data use and consumer privacy, and have nothing to do with Facebook in particular. However, there’s a separate question that does specifically concern Facebook–or really, the authors of the PNAS paper–which is whether the authors, in their efforts to publish their findings, violated any laws or regulations.
When I wrote my post, I was under the impression–based largely on reports of an interview with the PNAS editor, Susan Fiske–that the authors had in fact obtained approval to conduct the study from an IRB, and had simply neglected to include that information in the text (which would have been an editorial lapse, but not an unethical act). I wrote as much in a comment on my post. I was not suggesting–as some seemed to take away–that Facebook doesn’t need to get IRB approval. I was operating on the assumption that it had obtained IRB approval, based on the information available at the time.
In any case, it now appears that may not be exactly what happened. Unfortunately, it’s not yet clear exactly what did happen. One version of events people have suggested is that the study’s authors exploited a loophole in the rules by having Facebook conduct and analyze the experiment without the involvement of the other authors–who only contributed to the genesis of the idea and the writing of the manuscript. However, this interpretation is far from established, and risks unfairly maligning the authors’ reputations, because Adam Kramer’s post explaining the motivation for the experiment suggests that the idea for the experiment originated entirely at Facebook, and was related to internal needs:
The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook. We didn’t clearly state our motivations in the paper.
How you interpret the ethics of the study thus depends largely on what you believe actually happened. If you believe that the genesis and design of the experiment were driven by Facebook’s internal decision-making, and the decision to publish an interesting finding came only later, then there’s nothing at all ethically questionable about the authors’ behavior. It would have made no more sense to seek out IRB approval for this one experiment than for any of the other in-house experiments Facebook regularly conducts. And there is, again, no question whatsoever that Facebook does not have to get approval from anyone to do experiments that are not for the purpose of systematic, generalizable research.
Moreover, since the non-Facebook authors did in fact ask the IRB to review their proposal to use archival data–and the IRB exempted them from review, as is routinely done for this kind of analysis–there would be no legitimacy to the claim that the authors acted unethically. About the only claim one could raise an eyebrow at is that the authors “didn’t clearly state” their motivations. But since presenting a post-hoc justification for one’s studies that has nothing to do with the original intention is extremely common in psychology (though it shouldn’t be), it’s not really fair to fault Kramer et al for doing something that is standard practice.
If, on the other hand, the idea for the study did originate outside of Facebook, and the authors deliberately attempted to avoid prospective IRB review, then I think it’s fair to say that their behavior was unethical. However, given that the authors were following the letter of the law (if clearly not the spirit), it’s not clear that PNAS should have, or could have, rejected the paper. It certainly should have demanded that information regarding interactions with the IRB be included in the manuscript, and perhaps it could have published some kind of expression of concern alongside the paper. But I agree with Michelle Meyer’s analysis that, in taking the steps they took, the authors are almost certainly operating within the rules, because (a) Facebook itself is not subject to HHS rules, (b) the non-Facebook authors were not technically “engaged in research”, and (c) the archival use of already-collected data by the non-Facebook authors was approved by the Cornell IRB (or rather, the study was exempted from further review).
Absent clear evidence of what exactly happened in the lead-up to publication, I think the appropriate course of action is to withhold judgment. In the interim, what the episode clearly does do is lay bare how ill-prepared the existing HHS regulations are for dealing with the research use of data collected online–particularly when the data was acquired by private entities. Actually, it’s not just research use that’s problematic; it’s clear that many people complaining about Facebook’s conduct this week don’t really give a hoot about the “generalizable knowledge” side of things, and are fundamentally just upset that Facebook is allowed to run these kinds of experiments at all without providing any notification.
In my view, what’s desperately called for is a new set of regulations that provide a unitary code for dealing with consumer data across the board–i.e., in both research and non-research contexts. This leaves aside exactly what such regulations would look like, of course. My personal view is that the right direction to move in is to tighten consumer protection laws to better regulate management and use of private citizens’ data, while simultaneously liberalizing the research use of private datasets that have already been acquired. For example, I would favor a law that (a) forced Facebook and other companies to more clearly and explicitly state how they use their users’ data, (b) provided opt-out options when possible, along with the ability for users to obtain a report of how their data has been used in the past, and (c) gave blanket approval to use data acquired under these conditions for any and all academic research purposes so long as the data are deidentified. Many people will disagree with this, of course, and have very different ideas. That’s fine; the key point is that the conversation we should be having is about how to update and revise the rules governing research vs. non-research uses of data in such a way that situations like the PNAS study don’t come up again.
What Facebook does is not research–until they try to publish it
Much of the outrage over the Facebook experiment is centered around the perception that Facebook shouldn’t be allowed to conduct research on its users without their consent. What many people mean by this, I think, is that Facebook shouldn’t be allowed to conduct any experiments on its users for purposes of learning things about user experience and behavior unless Facebook explicitly asks for permission. A point that I should have clarified in my original post is that Facebook users are, in the normal course of things, not considered participants in a research study, no matter how or how much their emotions are manipulated. That’s because the HHS’s definition of research includes, as a necessary component, that there be an active intention to contribute to generalizable new knowledge.
Now, to my mind, this isn’t a great way to define “research”–I think it’s a good idea to avoid definitions that depend on knowing what people’s intentions were when they did something. But that’s the definition we’re stuck with, and there’s really no ambiguity over whether Facebook’s normal operations–which include constant randomized, controlled experimentation on its users–constitute research in this sense. They clearly don’t. Put simply, had Facebook eschewed disseminating its results to the broader community, the experiment in question would not have been subject to any HHS regulations whatsoever (though, as Michelle Meyer astutely pointed out, technically the experiment probably isn’t subject to HHS regulation even now, so the point is moot). Again, to reiterate: it’s only the fact that Kramer et al wanted to publish their results in a scientific journal that opened them up to criticism of research misconduct in the first place.
This observation may not have any impact on your view if your concern is fundamentally about the publication process–i.e., you don’t object to Facebook doing the experiment; what you object to is Facebook trying to disseminate their findings as research. But it should have a strong impact on your views if you were previously under the impression that Facebook’s actions must have violated some existing human subjects regulation or consumer protection law. The laws in the United States–at least as I understand them, and I admittedly am not a lawyer–currently afford you no such protection.
Now, is it a good idea to have two very separate standards, one for research and one for everything else? Probably not. Should Facebook be allowed to do whatever it wants to your user experience so long as it’s covered under the Data Use policy in the user agreement you didn’t read? Probably not. But what’s unequivocally true is that, as it stands right now, your interactions with Facebook–no matter how your user experience, data, or emotions are manipulated–are not considered research unless Facebook manipulates your experience with the express intent of disseminating new knowledge to the world.
Informed consent is not mandatory for research studies
As a last point, there seems to be a very common misconception floating around among commentators that the Facebook experiment was unethical because it didn’t obtain informed consent, which (so the argument goes) is a requirement for all research studies involving experimental manipulation. I addressed this in the comments on my last post in response to other comments:
[I]t’s simply not correct to suggest that all human subjects research requires informed consent. At least in the US (where Facebook is based), the rules governing research explicitly provide for a waiver of informed consent. Directly from the HHS website:
An IRB may approve a consent procedure which does not include, or which alters, some or all of the elements of informed consent set forth in this section, or waive the requirements to obtain informed consent provided the IRB finds and documents that:
(1) The research involves no more than minimal risk to the subjects;
(2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;
(3) The research could not practicably be carried out without the waiver or alteration; and
(4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.
Granting such waivers is a commonplace occurrence; I myself have had online studies granted waivers before for precisely these reasons. In this particular context, it’s very clear that conditions (1) and (2) are met (because this easily passes the “not different from ordinary experience” test). Further, Facebook can also clearly argue that (3) is met, because explicitly asking for informed consent is likely not viable given internal policy, and would in any case render the experimental manipulation highly suspect (because it would no longer be random). The only point one could conceivably raise questions about is (4), but here again I think there’s a very strong case to be made that Facebook is not about to start providing debriefing information to users every time it changes some aspect of the news feed in pursuit of research, considering that its users have already agreed to its User Agreement, which authorizes this and much more.
Now, if you disagree with the above analysis, that’s fine, but what should be clear enough is that there are many IRBs (and I’ve personally interacted with some of them) that would have authorized a waiver of consent in this particular case without blinking. So this is clearly well within “reasonable people can disagree” territory, rather than “oh my god, this is clearly illegal and unethical!” territory.
I can understand the objection that Facebook should have applied for IRB approval prior to conducting the experiment (though, as I note above, that’s only true if the experiment was initially conducted as research, which is not clear right now). However, it’s important to note that there is no guarantee that an IRB would have insisted on informed consent at all in this case. There’s considerable heterogeneity in different IRBs’ interpretation of the HHS guidelines (and in fact, even across different reviewers within the same IRB), and I don’t doubt that many IRBs would have allowed Facebook’s application to sail through without any problems (see, e.g., this comment on my last post)–though I think there’s a general consensus that a debriefing of some kind would almost certainly be requested.
Can you tell me how the Facebook experiment was legal when, at the time the experimental data were gathered, their policy did not yet include the currently present clause that your data may be used for research purposes? As far as I can tell, they breached the terms of use at the time of the experiment.
Well, when I wrote that it was clearly legal, I wasn’t aware that the terms of service had changed relatively recently. I’ll probably need to update my post(s).
That said, while I’m not a lawyer, I do think Facebook still has a pretty clear-cut case when it says that the other (older) clauses already cover their A/B tests. The user agreement clearly says they may use your data to improve the service, and it’s hard to argue that A/B testing isn’t useful for improving the service. You can argue that this isn’t explicit enough for most people’s tastes, and I would agree with you; but that seems to me to be something that needs to be addressed by changes to the law. As it stands, I very much doubt a lawsuit along these lines would have a leg to stand on. But again–I’m not an expert in such matters.
I’m afraid you might be right. Though I wouldn’t be surprised if the insertion of the “research” clause right after the data collection for Kramer turned out to be not entirely coincidental. Still, apart from legislation, perhaps it would be a good idea to really start implementing courses in schools that teach you that, in pure Philip K Dick style, the world on the web is by definition an illusion that is actively constructed to serve particular commercial or non-commercial interests. I admit I was also surprised to find so many were unaware that they were basically being shown whatever Fb or others want them to see…
“In my view, what’s desperately called for is a new set of regulations that provide a unitary code for dealing with consumer data across the board–i.e., in both research and non-research contexts. This leaves aside exactly what such regulations would look like, of course. My personal view is that the right direction to move in is to tighten consumer protection laws to better regulate management and use of private citizens’ data, while simultaneously liberalizing the research use of private datasets that have already been acquired. ” – hear hear
Couldn’t agree more, Dude.
Your backpedaling, the necessity for frequent updating, and the retraction of certain points of your analysis illustrate to me why Facebook shouldn’t have done the study, PNAS shouldn’t have published it, and you shouldn’t have rushed to the defense of Facebook. I think in hindsight you’re going to find this wasn’t good for your career, the study wasn’t good for Facebook, and the publication wasn’t in PNAS’s interests. I don’t think the uproar was counterproductive at all.
Michael, in my part of the world, when you’re wrong about certain things, you admit that you’re wrong about certain things; when new facts come to light, you update and revise your views accordingly; and when nuance and uncertainty exist, you acknowledge that nuance and uncertainty exist. Your world may operate differently, and that’s fine.
I appreciate your concern for my career; I think I will be okay.
I don’t know about how the world operates. I try to operate the same way you’re suggesting. I’m glad we have that in common. Don’t worry, I’m not terribly concerned about your career. I know you’ll be fine 🙂
I think the thing that bothers me most–and would be surprised if you could have sympathy for this point, Tal–is that in reading both your “defense” and your “defense of your defense” you strike me as responding like an economist. Yes, that’s a knock on economists in general, and I’m comfortable with the overgeneralization. But back to the point: economists often can’t understand the “emotional” or “spiritual” or “psychic” or “moral” argument. They just keep responding with “facts” and nuanced word-sorcery, even to the point of being genuinely hurt and confused as to why people have objections to their pronouncements and find them at odds with things like “warmth” and “human connection.” After all, they’re just being precise and logical and correct.
It’s not just economists; Steven Pinker strikes me the same way when he argues in “The Better Angels of Our Nature,” for example, that we should feel heartened by the fact that on a worldwide basis, both the frequency and volume of violent incidents have seen a historical decrease. I find the same kind of stupefied reliance on “the accurate argument” in discussions of net neutrality, economic inequality, and a host of other topics, controversial and not so controversial.
I have no doubt that you’re a smart man, Tal. I’m not arguing that your responses are not “accurate” or “factual” or “thoughtful.” My own judgment, though, is that there is something ethically faulty with what drives your response and the logic behind it.
Sadly, I agree. It never looks good for a psychologist to go on the record as saying, among other things, that doing research on minors without their informed consent is just essentially the cost of doing research on the internet (you do know that FB admitted to the WSJ that they are aware that minors were included in the experiment, right?).
People obsessed with BIG DATA seem to have forgotten that humans are behind that data and that confidentiality of that data is only ONE possible risk that experimental research poses. You don’t cover all human subjects concerns by anonymizing the data, not even close.
I think you should go back and re-read what I said. I most certainly did not say that “doing research on minors without their informed consent is just essentially the cost of doing research on the internet”. What I pointed out was that there is no way to verify identity on the internet, and thus it’s effectively impossible to prevent a motivated 14 year old from participating in an experiment intended for adults. I don’t think that much is debatable; it strikes me as an obvious fact of life on the internet, and every psychologist who conducts research on the web (and there are very many of us) recognizes this.
I also don’t appreciate your insinuation that I’m not aware that there are human beings on the other end of the line. I (and everyone else who collects data online) take the ethical responsibility to protect children very seriously. But I also recognize that if 1 in, say, 500 participants is a minor who is determined to ignore the instructions that only adults are to participate, and is willing to lie about their age, there is not much anyone can do short of never collecting data on the internet, which does not seem like a reasonable and measured response to the problem.
Note that this is actually not solely a problem for online studies (though it’s certainly more of one); it’s also an issue in laboratory-based studies. If a volunteer participant is 17 years old (as occasionally happens with first-year university students), but willing to lie about their age to participate, there are not many researchers who are going to be able to catch that deception, as it’s not routine to ask student volunteers for a photo ID. Out of necessity, everyone who conducts research takes their participants at their word in lab-based studies, and we have no choice but to do the same online. This does unfortunately mean that some very small proportion of subjects will, unbeknownst to the researcher, be minors who should not have participated in the study. But if your solution to this problem is to entirely stop doing research online (and, for that matter, to start requiring proof of age in offline studies), I don’t think you will find very many sympathetic ears–not among researchers, and not among IRBs.
Right. But he hasn’t backpedalled in this post? Have you read it?
1) passive vs active research
There’s a significant difference between passive and active research with regard to big data. In passive research we can look at what the user is seeing in terms of keywords and the like, and see what appear to be their responses to the inbound emails, news stories, and so on. With one billion odd people on board, Facebook has in principle the opportunity to study the responses of a seriously statistically significant sample of humanity.
On the other hand, active research involves altering the feeds based on some criteria and observing the effects.
There are obvious differences between these two scenarios, but, as many will point out, Facebook and others already manipulate the feeds that users see – the ads, the search results, and so on.
It wasn’t always this way. In the early days of the internet, when there was a baby WWW and Gopher was perhaps used more than the NCSA browser, there was a paucity of content, and one really wanted to see all of it – what comparatively little there was. As it grew, there was curation for the entire user base of a service, not individualized tailoring based on the prior decisions of the user. Nowadays we have ads appearing based on a previous search, and news stories appear near the top of feeds based on previous click-throughs. Yet, in all of this there are somehow the user’s decision processes at work – even if the user is not particularly well informed on the way that their decisions will likely impact what they are confronted with in the future. This is the dominant form of manipulation that Facebook and Google and so on engage in.
It is worth noting that at this time the placement of ads based on prior searches can be a spectacular fail if the user already purchased an item but the ad generator doesn’t have access to that information and continues to use “valuable” ad space uselessly hawking such items. This is an area of rapid development as vast eco-systems that provide round-trip user experience grow.
The distinction is between the user’s decisions influencing the feeds in a manner that can be reasonably said to be useful or beneficial for the user versus an independent source of selection manipulating the feed content for purposes that have nothing to do with the user’s interests or intent. That such manipulation may substantively alter the emotional state of the user, ultimately in a controlled and possibly predatory manner, is certainly an indication of the sort of action that we should not encourage or condone.
2) informed consent
It is specious to argue that Terms of Service having been clicked on represent informed consent, even if the user has read them. So I sign some such ToS two years ago, and today some researcher decides to include me in an active study. If I truly understood the ToS to mean that at any time, for an unknown duration, I am subject to unknown manipulations of my environment for unknown purposes, perhaps I would not have signed, or I would be continually apprehensive when using the service; but more likely I would be pretty much blind to the possibilities.
I really doubt that most users have any idea of the extent to which big data is used to incorporate user decisions into the provision of future content (i.e., manipulation that is in some defensible sense tied to the user’s intent versus active manipulation based on the intent of anonymous others) and the extent to which the big data about users’ decisions and linkages with other users are monetized by companies such as Google and Facebook. There is a very large asymmetry between the apparent benefits that an individual user gets in maintaining some connections with geographically dispersed friends and family and the concentration of wealth and power (through use of big data) that is garnered by companies such as Facebook.
Disingenuous at best. What the author is essentially saying is that everybody does it, so why not Facebook. Well, everybody also includes tobacco companies lying about cigarettes, the food industry concealing the depth of research undertaken to hook people on food, and, on the most basic interpersonal level, lying. FB is a company and I don’t expect them to go beyond what is in their best (financial) interests. But the danger lies in people trying to brush this off as inconsequential. No sir, this is a case of a company as rapacious as a credit card company burying its penalties in a ton of fine print. But worse is the enabling of such tactics by a respected university and a leading journal.
Refining and restating your points has become necessary because the parties involved have been opaque and obfuscating at every turn.
No, what the author (that’s me, right?) is saying is that because everybody does it, it’s a mistake to focus specifically on Facebook. That does not mean it’s inconsequential. Far from it: as I note above, I’m personally very much in favor of passing new regulations that govern experimentation with human subjects in a non-academic research context. But the point I’m making here is that people who are upset about this issue should recognize that it’s a pervasive problem that can’t be solved simply by getting Facebook to amend its Terms of Service.
To borrow your cigarette analogy, it’s as if we caught Marlboro adding extra nicotine to cigarettes, and instead of saying “hey, maybe we should regulate this entire industry, because it’s likely that everyone else is also doing this”, we said “hey, let’s make sure Marlboro promises not to do it again, and then the problem is solved.”
Actually, the credit card companies, which you bring up, are an even better example: in response to predatory practices there, we didn’t just cherry-pick one company and make an example out of them; in the US, Congress passed a comprehensive reform law that caused credit card default rates to plummet. I think that’s exactly the right model for this problem too. Pretending that this is a Facebook-specific problem is, as I said in my previous post, counterproductive, in that it makes an example out of precisely the wrong company (i.e., the one that’s actually willing to share at least part of its internal process publicly).
Facebook may be the company willing to show some of their internal workings, but they are also the company with the most users and the biggest potential for abuse of those users. At this time, I don’t think there’s parity among the social network companies. Pretending that Facebook is just one of many neglects the very specific way in which people use it that is not replicated by any of the other sites. I agree that regulating the industry is important, but at this moment, I think Facebook does have a very specific role that seems monopoly-like in some ways.
What I wonder about is how the HHS rules re: research on minors play into all this. I’m looking at the part that says “However, the exemption at §46.101(b)(2) for research involving survey or interview procedures or observations of public behavior does not apply to research covered by this subpart [research on children], except for research involving observation of public behavior when the investigator(s) do not participate in the activities being observed.”
There are two parts to this argument: the first being that Facebook could claim that they were observing public behavior. This might not be the easiest exemption to choose, but I could see an argument that activities on Facebook (i.e., status posts) are public behavior. But then, if they claimed exemption under that provision, in the case of minors it would again matter whether Facebook gathered the data beforehand, or whether the researchers helped plan the experimental manipulation. If the investigators were involved in the gathering of the dataset, then they would not be eligible for an exemption in the case of children under 18. This obviously would be different if Facebook had made a statement about excluding users under 18 from the study, but to my knowledge they did not.
This is all my own speculation, of course, and my inability to really understand what is allowed and isn’t probably supports your point that clearer regulations are needed.
Talking about this issue in terms of “what companies can do with your data” is a little bit limited, I think. If a platform’s role is to facilitate conversation and allow connection between people, then the real concern is when they start manipulating the flow, or people’s relationships for their own benefit.
Calling something “data” already implies treating it as part of a scientific experiment. If I write something that other people see, or don’t see, is this data? Only with respect to a certain research question. From my perspective of wanting to communicate, it is not data.
Just pointing out that calling stuff data in social networks is already framing the question in terms of a scientific experiment. Doing such an experiment may be fine, and interesting, as long as people can consent to it, but it’s certainly not the perspective of the people using the site.
I know I’m a bit late to this party, but it’s been quite a while since I read your blog, and I’m just now catching up.
This actually reminds me of something my University did that I objected to. I will spare you the details, but suffice it to say, it has to do with Phishing, and I thought it was incredibly unethical.
The main reason was not that the “trickery” or “manipulation” violated the informed consent guidelines; as you point out, there are exceptions. Rather, I objected (unsuccessfully) because the experiment was applied to the entire freshman class, a large number of subjects with a wide range of psychological backgrounds who had not been pre-screened for things like depression or suicidal tendencies. When the “trick” was revealed to them, I thought the experiment could pose more than “minimal risk” to some individuals as “the last straw”.
I take your points about the IRB and “internal” studies; but my heart, rather than my head, wants to object to this on the same moral grounds, even if legality and logistics don’t permit anything to be done. Especially because dead things for days in your feed is a lot more dramatic than what I objected to.