While the intention of the bill seems good, I worry that this will become the next cookie popup. From the text of the bill:
"Any large online operator that engages in any form of behavioral or psychological research based on the activity or data of its users shall disclose to its users on a routine basis, but not less than once each 90 days, any experiments or studies that user was subjected to or enrolled in"
Based on their definition of behavioral research, simple analytics would fall under its purview. This means that every site that has over 100m MAU will have to have a popup disclosing that they're running analytics and A/B testing (because honestly this won't stop any of them from doing it - these things are industry standards).
I don't need to be informed that Facebook tracks what links I click on their site. I don't need google to tell me that they have a history of every search I've made, and that they tailored those results based on my past searches. We're trying to create a safe-space web to the detriment of UX. I support a lot of the stuff around younger kids, but I think the stuff for adults is just going to become a nuisance.
- We slightly changed the color of the submit button
- The forward button was removed from the context menu on chat bubbles
- An infrastructure change reduces the load time of the comments on articles by 10ms
- The weekly ad relevance model update is being certified, yielding a 0.000001 increase in CTR for small segments of the market.
On average, much more mundane than "human psychological experiments".
That's the point. It's like requiring a report to be filed whenever there is a "use of force" but then applying that rule using the Newtonian definition of force. Sat in your chair? File a report. Stand back up? File a report. Filed a report? File a report.
Worse, this kind of thing can happen retroactively. If you discover that your numbers are different from what you expected, but you hadn't declared any experiment, then comparing what changed before and after is the experiment. But you hadn't notified those users that you were doing an experiment, because you hadn't expected to have any reason to. So now you can't even have the people with the before-and-after data talk to the people who know what changes were made to the system in that time frame, because comparing that information would constitute doing the experiment.
It's like telling a car company they can't see their sales data when deciding which models to continue producing because it would constitute doing a psychological experiment on what kind of cars people like.
(On the other hand, it sounds like the law would only apply to entities the size of Facebook, and screw those guys in general. But it really is kind of a silly rule.)
These services are running huge numbers of experiments in order to maximize engagement. Then everyone wonders what happened when tons of people on Facebook end up depressed and tons of people on YouTube end up radicalized by extremist rabbit holes.
It's death by a thousand cuts.
If you have some food which is infected with salmonella, you don't pick it apart with a microscope at the level of individual cells and try to separate it back out, you just throw the whole thing away and eat something else.
In this context the contaminated food is Facebook.
IIUC, in order to do that comparison you still need to collect data. You may throw that data away and end your experiment right there, or you may do analysis on it, but you said it yourself - it is an experiment.
And what does opt-in even look like? No matter whether you want to "participate in the experiment" the submit button still needs to be some color for you, which is the only part of the "experiment" with any direct effect on you.
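To make that concrete, here is a minimal sketch of how a typical A/B assignment works; the experiment name, variants, and hashing scheme are hypothetical, not any particular company's implementation. The point is that every user is deterministically mapped to some bucket, so there is no "unassigned" state left over to opt into:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("blue", "green")) -> str:
    """Deterministically map a user to a variant by hashing their ID.

    Every user lands in *some* bucket: the submit button has to be
    rendered in one color or the other regardless of consent.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user-12345", "submit-button-color"))  # e.g. "green"
```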
The concern with psychological experiments isn't that they're collecting data. That's a different bailiwick. The major issue with psychological experiments is that they may have significant direct psychological consequences. If you show people only news stories about mass shootings and conflict it may cause them to become violent or suicidal -- which has nothing to do with whether you collect data on it or what you do with it afterwards. The experiment itself is the harm.
Which means we would need some kind of principled and efficient way of distinguishing those kinds of "real" experiments from just measuring what happens when you make a subtle adjustment to a context menu.
> Any large online operator that engages in any form of behavioral or psychological research based on the activity or data of its users shall disclose to its users on a routine basis ... any experiments or studies that user was subjected to or enrolled in
means that every A/B test needs to be disclosed.
> (6) LARGE ONLINE OPERATOR
> The term "large online operator" means any person that—
> (A) provides an online service;
> (B) has more than 100,000,000 authenticated users of an online service in any 30 day period; and
> (C) is subject to the jurisdiction of the Commission under the Federal Trade Commission Act (15 U.S.C. 41 et seq.).
Presumably there are definitions of other terms in that sentence (e.g. experiments, studies).
Here it is:
> BEHAVIORAL OR PSYCHOLOGICAL EXPERIMENTS OR RESEARCH—
> The term "behavioral or psychological experiments or research" means the study, including through human experimentation, of overt or observable actions and mental phenomena inferred from behavior, including interactions between and among individuals and the activities of social groups.
Honestly, I don't think this is clear enough. "Person clicks BLUE instead of GREEN" may or may not fall under this definition. I don't think it should, but if I have 100M+ authenticated users per month, I'm probably going to put up a notice anyway.
I'm not sure why you think an A/B test is not covered by
> the study ... of overt or observable actions and mental phenomena inferred from behavior
but it seems to me to be the very thing targeted by this legislation. I agree that the end result (Google disclosing 100k A/B tests each quarter) is a grotesque tax on private industry without any social gain. However, it doesn't strike me as terribly out of line with other legislation in its effect.
I'm not a lawyer though.
Ambiguity in law is handled in at least two different ways: in criminal matters, ambiguity is read in the most favorable light for the defendant; in regulatory matters, the interpretation adopted by the regulatory agency responsible for the law's implementation is considered binding so long as it is a "permissible construction" of the statute. The latter is commonly known as the "Chevron doctrine".
The long and short of it is that this bill, if enacted, will mean whatever the Executive Branch says it means. If it's particularly egregious, then their interpretation will be challenged in court and perhaps eventually trimmed down a bit.
I'd rather they not ask me anything and just assume consent. It's not like EU legislation is going to stop a Chinese company anyway.
But even for the people who want to read the agreement, it would be much better if this was implemented as a browser feature, giving users control and consistency, instead of different popups on each site.
If I get a popup listing various kinds of data collection that the site wants to do, and lists of "trusted partners" it will be shared with, etc., I generally refuse everything except "essential". If the site's idea of what is "essential" sounds excessive compared to the use I expect to make of it (just how much tracking is reasonably required in order to read an article?), I simply won't use it.
And if it makes the process of refusing consent particularly opaque or cumbersome (in violation of GDPR requirements), I certainly won't trust or use the site at all (I'm looking at you, Oath...)
Similarly, GDPR should have been about a requirement to provide a page listing all of the user data, not about endless popups on every site asking to accept cookies. If I use a browser that supports cookies then I already accept them.
I've always thought about trying to scrape government websites for law changes and drafts, shift them onto github and give the respective politicians clearly labelled pseudo accounts in an organisation in which they all vote on pull requests.
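What the scraping half of that might look like, as a very rough sketch: the bill URL below is hypothetical, the target directory is assumed to be an existing local git clone, and the GitHub organisation / pull-request / voting layer is left out entirely.

```python
import subprocess
import urllib.request
from pathlib import Path

# Hypothetical plain-text source for a bill; a real scraper would hit
# the actual government site or API and likely need HTML parsing.
BILL_URL = "https://www.example.gov/bills/detour-act.txt"
REPO = Path("law-tracker")  # an existing local clone of the tracking repo

def snapshot_bill(name: str, url: str) -> None:
    """Fetch the current text of a bill and commit it if it changed."""
    text = urllib.request.urlopen(url).read().decode("utf-8")
    target = REPO / f"{name}.txt"
    if target.exists() and target.read_text() == text:
        return  # nothing changed since the last run
    target.write_text(text)
    subprocess.run(["git", "-C", str(REPO), "add", target.name], check=True)
    subprocess.run(["git", "-C", str(REPO), "commit", "-m", f"Update {name}"], check=True)

snapshot_bill("detour-act", BILL_URL)
```

Run on a schedule, every change to a bill's text would show up as a diff in the repo's history.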
Wait, does this refer to A/B testing?
Many types of A/B tests are designed to increase conversion - to get a user to buy something, or signup, etc. I have personally (and I'm sure lots of folks on this site) been involved in A/B tests that specifically test what many would consider "dark patterns" to increase conversion.
Just take a look at Booking.com, which is famous for their A/B testing. Right now I get a popup banner when I hit that site which says "Welcome back! It's always a pleasure to see you! Sign in to see deals of up to 50% off." I guarantee the text in that banner has been A/B tested 9 ways to Sunday. I'd even bet they tested the percentage amount (i.e. whether it was 50%, 30% etc.) Of course "up to 50%" could mean 0%, which it probably is in most cases. And the whole purpose of that banner is to get you to authenticate and sign in, so they can track you better.
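For illustration, a banner like that could be parameterized into a multivariate test roughly as follows; the greeting copy and discount figures are invented for the example, not Booking.com's actual variants:

```python
import hashlib
import itertools

GREETINGS = ["Welcome back!", "It's always a pleasure to see you!"]
DISCOUNTS = ["30%", "50%"]

# Every combination of greeting and discount is its own arm of the test.
ARMS = list(itertools.product(GREETINGS, DISCOUNTS))

def banner_for(user_id: str) -> str:
    """Pick a stable banner variant for a user and fill in the template."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    greeting, discount = ARMS[digest % len(ARMS)]
    return f"{greeting} Sign in to see deals of up to {discount} off."

print(banner_for("user-42"))
```

Add a couple more factors (button color, urgency copy, placement) and the number of arms multiplies quickly, which is how a large site ends up with thousands of concurrently running tests.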
So yes, it most definitely will apply to certain forms of A/B testing. That also appears to be the point.
This is a good thing
Not always. A/B testing is the reason that we now have weapons-grade clickbait headlines, and those terrible little grids of ads at the bottom of blog posts. Neither of those are good things.
Obviously a good number of A/B tests are pretty innocent, but if it's non-trivial to differentiate between them and https://en.wikipedia.org/wiki/Nudge_theory then I'm 100% for completely ditching A/B tests.
The answer is: modern science has always used a form of A/B testing.
Gathering data from a million people on which shade of red makes them more likely to click a button is entirely different today due to the scale, how cheap it is to set up, and how cheap it is to tweak. This data can then be used to "nudge" people in a direction that you benefit from (and they may or may not benefit from, and society at large may or may not benefit from). At scale, these very small nudges can have an impact. The unregulated methods we use for this keep improving (AI).
Not to throw shade, but there's a reason why Amazon has been hiring behavioural psychologists. We should be aware and thinking about this.
Perhaps, at the same time, we need to inform people through better education how to resist the urge to spend borrowed money whenever possible?
This isn't just about good UI. Not everyone is using these sort of behavioural tests to present a better UI. It's also about influence (micro influence). I'm not sure you're seeing the whole picture.
It sounds like conspiracy, but Obama and Cameron had "Nudge Units", and that was 5 years ago.
Have I just committed a crime?
I'm guessing courts will decide.
I hope so.
Experimenting on your customers is about as user hostile as you can get.
"It has published details of a vast experiment in which it manipulated information posted on 689,000 users' home pages and found it could make people feel more positive or negative through a process of "emotional contagion".
It's not that testing the conversion rates of button A versus button B is in and of itself immoral, it's that experimenting on people without their informed consent, under any circumstances, is. I'm intimately familiar with FB's platform as a developer and a user, and it's my intuition that 9/10 people aren't aware of the degree to which they are being experimented on via multivariate testing, and I think a reasonable person would say they have a right to be informed of this.
Another note is that after years of using the platform I can tell that when non-technical people DO become aware of the fact that their experience using the application is sometimes fundamentally different from others' because they're in a non-control bucket, they generally react pretty negatively to the notion. Sure, some of this is the standard "users always hate every UI change no matter what it is" syndrome, but I've noted a lot of "this is creepy and I wonder how much it's been happening before", which is, imo, a super legitimate response, and shouldn't be disregarded because it's inconvenient for FB to get consent.
To me, this could definitely qualify as "psychological experiments" if it were intentional as you describe. Most likely a failed and useless experiment though, but that's due to the medium and the difficulty of implementing it correctly (how would you guarantee none of your greeters step out of line? What if you wanted to quickly evolve and modify the experiment?).
The fact is that it's much easier to run these sort of experiments on a web site than it is in meat space. It can also be much subtler and far more specific. It would be impossible to manipulate the variations in the real world as efficiently (or at all) like you can online.
The ability to actually do this stuff efficiently and at scale is pretty recent, and we ought to consider and deliberate over the consequences.
How about these: Are corner stores allowed to experiment with pricing? Are restaurants allowed to experiment with new menus? These are experiments involving humans. Are you just asking for poorly designed experiments?
What you're asking for is companies to launch once and never know if it worked. And indeed, software used to be like that, and it sucked...
Experiments and experiments on live non-consenting users are two different things.
> How about these: Are corner stores allowed to experiment with pricing? Are restaurants allowed to experiment with new menus? These are experiments involving humans. Are you just asking for poorly designed experiments?
Let a corner store charge different people different prices and let me know how far you get. They also have to deal with the consequences of their experiments: if a customer sees that the price of an item has doubled in an experiment, they're unlikely to come back. There's an asymmetry issue, and not coming back is often not an option you have in an environment with lock-in and network effects.
> What you're asking for is companies to launch once and never know if it worked. And indeed, software used to be like that, and it sucked...
Yes, developers had to think through design decisions, stick to well defined HIG's and use controlled test groups, truly a dark age.
Well, under your proposal they can’t know how far they’ll get.
> Yes, developers had to think through design decisions, stick to well defined HIG's and use controlled test groups
Well, did they? To a greater extent than today?
But if it's a game, or a blog? Knock yourself out, no matter how big it is.
Which is to say, some tech companies think that A/B experiments that might lead someone to commit suicide are okay.
Knowing that this is possible, and how to measure the effect, lets them detect when they accidentally do it and reverse course.
Making it illegal to figure out the negative impacts of your decisions will make it harder to avoid them.
It would be much better to require disclosure when these negative impacts are detected and require that this information can only ever be used in the best interest of the user.
Just because the outcome of the research is potentially valuable doesn't mean it's ethical to conduct it on people, especially without their consent.
They explicitly created a situation to depress people, which could definitely increase the likelihood of suicide, particularly if they happened to randomly select someone who already was predisposed to that for other reasons.
I would argue someone at Facebook should've been brought up on criminal charges for this "experiment".
I'm assuming you mean in aggregate?
> They chose to expose some people to predominantly sad and depressing posts, which is not "what they're already being exposed to"
Can you source your "predominantly" here?
If their algorithm is operating randomly, it stands to reason that some number of people will get a "predominantly" negative feed from time to time. So in this sense, some people were already unwittingly being exposed to a predominantly negative feed. So it seems reasonable to want to understand the results of this.
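As a toy illustration of that point (the numbers are made up purely for the example): even with a perfectly neutral 50/50 mix of positive and negative stories, a nontrivial fraction of users will see a mostly negative feed just by chance.

```python
from math import comb

# Probability that at least 8 of the 10 stories in a feed are negative,
# assuming each story is independently negative with probability 0.5.
p_neg, n = 0.5, 10
p_mostly_negative = sum(
    comb(n, k) * p_neg**k * (1 - p_neg)**(n - k) for k in range(8, n + 1)
)
print(f"{p_mostly_negative:.3f}")  # ~0.055, i.e. roughly 1 feed in 18
```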
If their experiment resulted in people seeing negativity far beyond what is a possible outcome from their algorithm, then you might have a point about it being unethical.
What criminal law did they violate? Or are you saying there should have been a law against what they did?
I'm aware of only one case where someone was convicted of such a crime in the US without being physically involved in the death: Michelle Carter, who directly and repeatedly encouraged her boyfriend to kill himself, and goaded him into continuing what was ultimately a successful suicide attempt when he started to back out. Despite her active encouragement and unambiguous intent, the legal theory was controversial and the case has seen multiple appeals.
I find it quite unlikely that a court will accept the argument that intentionally making someone sad is the proximate cause of their death by suicide, even if done to a large number of people at the same time. Were that argument accepted, it could be applied to other situations affecting the emotions of many people just as easily, such as producing a sad song or movie.
Experimenting with customers is what gives you the information to make a better product.
It's the least user hostile thing you can get.
This is FAR different from product testing, say in the hardware world, where you tell people you want them to come test a product, or in the design world where you show them various things and quiz them on their feelings. In these situations they all know they're being tested on.
So no, this isn't "the least" user hostile thing you can do. Doing things without consent is basically the prerequisite for hostility here.
Do you understand how often websites change without asking the user? Websites are constantly being updated, algorithms tweaked, features being added and taken away. You seem to be taking offense to the fact that they're providing a different experience to different subsets of the userbase? Is that what you're trying to ban? What could that possibly accomplish?
If you don't have A/B testing, then websites are just going to do it the old fashioned way: collect data, make the feature change, compare the data. What does this solve?
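A minimal sketch of that "old fashioned way", with made-up daily metrics; the point is that the before/after comparison happens whether or not anything was ever labelled an experiment:

```python
from statistics import mean

# Hypothetical daily click-through rates around a feature change.
before_change = [0.041, 0.043, 0.040, 0.042, 0.044]
after_change = [0.046, 0.047, 0.045, 0.048, 0.046]

lift = mean(after_change) - mean(before_change)
print(f"Observed lift: {lift:+.4f}")  # compared after the fact, no A/B framework needed
```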
Further, I would say that the parent post isn't even about this. It's about protecting the consumer, and yes, I would go so far as to say that if the "change" that the websites want to make violates the rights of the user then yeah, they should be restricted in their ability to do so!
Why do users need to be explicitly informed of AB tests but not about other new gradual feature roll outs?
Frankly I think when you use a web site you are giving consent for your behavior on that site to be analyzed. I wouldn’t act indignant at traditional retailers attempting to learn from my shopping behavior in their stores so that they can improve their shopping experience. That’s just how businesses work.
Why? To make a physical analogy, you're on their property. You're in their store, walking around perusing their wares, using their tools, so of course they have the complete right to watch you.
There is no way to legislate this, your only option is to raise a stink about it and hope that they'll be more transparent in the future. You can't "require" companies to tell you how they're using your data. Once you've consented to your data being collected, that's it.
And, they do have consent to change the site at any point. The opposite would be for websites to never be allowed to do any kind of update because they didn't have user consent beforehand.
Does the homepage work better? Good experiment.
Can we manipulate people into feeling depressed or happy? Bad experiment.
"Better" for who? The website owners interests are rarely in alignment with my interests. They want increased sales, higher user engagement, etc. I want less engagement and the ability to make informed choices on products.
Let's say there's a product listing with its list of features, and this list has been extensively tweaked to maximize sales. As a result of that tweaking they took away a line item that would have caused me not to buy it, say an annoying LED status indicator. This is good for the website owner but bad for me, because I've lost the ability to make an informed decision. It's asymmetric manipulation and I'd regard it as immoral.
Edit: My response was about website A/B testing.
And how is that measured by most social media companies? "engagement".
And the goal is rarely to "improve my experience", it's to more efficiently manipulate me into spending more money, directly or indirectly.
Most people when they hear A/B testing think of something like switching the color of a button, or what have you. But when you start A/B testing a complex series of permutations of components in aggregate, specifically designed to target certain psychological profiles, you can actually start to learn a lot about the group you're testing.
To pretend that A/B testing is some simple "what image do you like more?" game is entirely disingenuous and is so characteristic of the attitude people in tech have when dealing with people or any kind of social side effects of systems they build.
We absolutely do need to be very careful with this. Careful with how we test people, careful with how the information is recorded, and careful with what we do with the data.
I'm going to start calling A/B testing "psychological side channel attacks" and maybe then places like HN will appreciate what's happening more.
Of course Y is usually something that makes the site owner money. Sometimes A, B, both, or the entire business model of the site is unethical. In such cases, they would still be unethical if a comparison test was not run. In cases where none of them are unethical, I have trouble imagining a realistic scenario in which the act of running a comparison test makes it unethical.
Not only do I not believe this (most people on HN tend to have a very superficial understanding of whatever tech is being discussed), but there is always a dismissal of any social consequences that may happen as a result of using any kind of technology.
>I have trouble imagining a realistic scenario in which the act of running a comparison test makes it unethical.
I can't tell if you're being serious or not. You don't need any kind of testing framework for a few famous examples to satisfy your "realistic" qualification. I see a lot of bland contrarian stuff on HN, but I'm kind of speechless right now.
There's a fairly strong case to be made that intentionally making a large number of people sad just to see if you can is unethical. There's a somewhat weaker case to be made that manipulating the happy group was also an unethical distortion of their reality. I fail to see an ethical problem with the fact that it was an A/B comparison. Instead, A and probably B would be unethical to attempt under any conditions without consent.
One of those moments I'm glad that our government is so broken at this point that it can't really drive significant change.
> U.S. Sens. Mark R. Warner (D-VA) and Deb Fischer (R-NE) have introduced the Deceptive Experiences To Online Users Reduction (DETOUR) Act.
A bipartisan bill that would not allow the IRS to offer free tax filings:
The Defense of Marriage Act was also bipartisan:
COPA was bipartisan:
And I would dare say that it's probably one of the biggest contributors to all legislation.
Literally applies to every sector.
Fixing a problem sounds good, so it's easy to market. Defining what the problem is and avoiding unintended consequences is hard.
Lots of people have done it, from Tim Wu to Jaron Lanier to Clay Shirky to Tristan Harris. The problem definitions and solutions are almost 10 years old now.
It's just that YouTube, Facebook and Twitter didn't give a shit. Plus all kinds of side show debates about free speech, privacy, anonymity and half a dozen other things have taken focus away from the fundamental change social media introduced into the way humans as a group communicate.
That fundamental change is that these systems attach a publicly visible number to everything anyone says.
Whether it is a like/view/upvote/click/follower/retweet count it has an effect on how people think and behave.
Whether it is the President, a journalist, or a 10-year-old, the numbers have an unasked-for influence.
There is no good psychological/sociological reason for these numbers to be publicly visible in real time.
Engagement and recommendation systems can still collect these numbers and continue to do their jobs without displaying any of them publicly in real time, or at all.
These changes can't be rolled out instantly because a large part of the population are hooked and need to be slowly weaned off.
EDIT: Thanks to sbov (https://news.ycombinator.com/item?id=19620441) for doing the legwork the news isn't doing and digging up the text. It doesn't seem to mention addictive games at all; maybe that's another bill?
This seems to be a trend among recent articles.
How many online services have that many MAUs outside Facebook and Google? Even Twitter doesn't have that many in the US. Seems laser targeted at Facebook, but the abuses come from many many more companies.
Slack needs a serious spanking for how they handle notification opt-outs. As far as I know there is no global "mute everything" switch, which should exist.
Without that, the bill is more like a stick in the wheels for Facebook's future competitors.
Little guys shouldn’t be given a pass on following the rules.
I left my work iPhone 6 on while on vacation and returned to a dead phone. When I charged it up and powered it back on, I was met with the "we're throttling your phone because of a power event" message. So I went on the adventure to nope out of that. Holy crap, Apple does not want you to turn it off once they've enabled it. And apparently they enable it any way they can, because the phone battery is at 95% capacity and it runs like a top.
The real issue is the incentives created by shareholder capitalism. While I would like that solved, that solution is so far away that I'll take whatever hacks I can in the meantime.