U.S. senators introduce social media bill to ban 'dark patterns' tricks (reuters.com)
259 points by thefounder 15 days ago | 152 comments



Not a single "news" article I can find links to the actual text of the bill. Here it is:

https://www.scribd.com/document/405606873/Detour-Act-Final


Thank you, was the first thing I looked for.

While the intention of the bill seems good, I worry that this will become the next cookie popup. From the text of the bill:

"Any large online operator that engages in any form of behavioral or psychological research based on the activity or data of its users shall disclose to its users on a routine basis, but not less than once each 90 days, any experiments or studies that user was subjected to or enrolled in"

Based on their definitions of behavioral research, simple analytics would fall under its purview. This means that every site with over 100M MAU will have to have a popup disclosing that they're running analytics and A/B testing (because honestly this won't stop any of them from doing it - these things are industry standard).

I don't need to be informed that Facebook tracks what links I click on their site. I don't need Google to tell me that they have a history of every search I've made, and that they tailored those results based on my past searches. We're trying to create a safe-space web to the detriment of UX. I support a lot of the stuff around younger kids, but I think the stuff for adults is just going to become a nuisance.


Given the historical record, it's hard to imagine a time when someone would make a comment like "Regulations on human psychological experiments, what are we, a bunch of delicate flowers?!" But I have a hard time believing it would be seriously novel to conflate what goes on online with group psychological experiments. I'm not sure which side of the debate I would land on, but I do think it's a reasonable enough thing for a deliberative body to consider, and certainly reasonable enough for a healthy debate without trying to sandbag it as "safe space".


Framing it as "human psychological experiments" evokes famous psych experiments like the marshmallow test and the Stanford Prison Experiment. Tech companies run these kinds of experiments occasionally (I believe Facebook is known to have tried to manipulate mood with a Newsfeed change), but I feel like this way of describing it masks the fact that 99% of large scale A/B tests are things like

- We slightly changed the color of the submit button

- The forward button was removed from the context menu on chat bubbles

- An infrastructure change reduces the load time of the comments on articles by 10ms

- The weekly ad relevance model update is being certified, yielding a 0.000001 increase in CTR for small segments of the market.

On average, much more mundane than "human psychological experiments".


In academia, those sorts of tests would still need to be approved by the human experimentation ethics board. There are no exceptions for trivial tests.


> In academia, those sorts of tests would still need to be approved by the human experimentation ethics board. There are no exceptions for trivial tests.

That's the point. It's like requiring a report to be filed whenever there is a "use of force" but then applying that rule using the Newtonian definition of force. Sat in your chair? File a report. Stand back up? File a report. Filed a report? File a report.

Worse, this kind of thing can happen retroactively. If you discover that your numbers are different than expected, but you hadn't declared any experiment, comparing what changed before and after is the experiment. But you hadn't notified those users that you were doing an experiment, because you hadn't expected to have any reason to. So now you can't even have the people with the before-and-after data communicate with the people who know what changes were made to the system in that time frame, because comparing that information would constitute doing the experiment.

It's like telling a car company they can't see their sales data when deciding which models to continue producing because it would constitute doing a psychological experiment on what kind of cars people like.

(On the other hand, it sounds like the law would only apply to entities the size of Facebook, and screw those guys in general. But it really is kind of a silly rule.)


We can all talk and be flippant about how trivial it is to change the colour of a button or whatever, but the sum total of all these changes is something different.

These services are running huge numbers of experiments in order to maximize engagement. Then everyone wonders what happened when tons of people on Facebook end up depressed and tons of people on YouTube end up radicalized by extremist rabbit holes.

It's death by a thousand cuts.


That's a separate problem though. The solution for that isn't to do something at the level of the individual experiments, it's to do something at the agglomeration level where the trivial individual harms are actually accumulating.

If you have some food which is infected with salmonella, you don't pick it apart with a microscope at the level of individual cells and try to separate it back out, you just throw the whole thing away and eat something else.

In this context the contaminated food is Facebook.


To continue with your analogy, Facebook is just one tainted chicken breast in the meat counter. We need to examine the entire meat packing and inspection infrastructure that gave rise to this mess.


> comparing what changed before and after is the experiment.

IIUC, in order to do that comparison you still need to collect data. You may throw that data away and your experiment ends right there, or you may do analysis on that data, but you said it yourself - it is an experiment.


Right, so what are we trying to do here then? Having a notification that you're constantly participating in an open-ended experiment with a purpose to be determined at a future date seems worse than nothing. But if you require a more specific notification before the data is collected then the after the fact analysis doesn't just require user notification, it's inherently prohibited.


Yeah, I'd expect the notification as an opt-in? Do you want to be part of that experiment?


The experiment where they change the color of the submit button? What should cause me to care about that?

And what does opt-in even look like? No matter whether you want to "participate in the experiment" the submit button still needs to be some color for you, which is the only part of the "experiment" with any direct effect on you.

The concern with psychological experiments isn't that they're collecting data. That's a different bailiwick. The major issue with psychological experiments is that they may have significant direct psychological consequences. If you show people only news stories about mass shootings and conflict it may cause them to become violent or suicidal -- which has nothing to do with whether you collect data on it or what you do with it afterwards. The experiment itself is the harm.

Which means we would need some kind of principled and efficient way of distinguishing those kinds of "real" experiments from just measuring what happens when you make a subtle adjustment to a context menu.


Yes, and it's dumb. It's a bureaucratic nightmare that most likely inhibits progress. Not only that, this is also being used as a cudgel to silence the people that did the grievance studies hoax. [0]

[0] https://reason.com/blog/2019/01/07/peter-boghossian-portland...


Neither does it deliver a guarantee on the results you get out at the end - the Stanford experiment was faked, and academics have struggled to replicate the marshmallow test.


While I agree with your point, I do think that some kind of ethics oversight should happen over experiments like the ones you mentioned. I just think that it's absurd to expect the same from simple tests.


I thought the marshmallow test was replicated but clarified -- that it showed how children react to un/trusted adults, not uncovering some genetic propensity for deferred rewards.


A/B testing is usually trivial and the disclosure will be trivial too. It does not need to be any burden to the readers (like a popup), and with good tooling (and A/B testing requires tooling) it will not add much work for the developers either. It might just be automatically recorded in a public log. From time to time there will be something more interesting in that log, and it will be the media's task to analyse the logs and discover the interesting stuff. The disclosure just makes this work easier.
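
To sketch what "automatically recorded in a public log" could look like (the helper name and log path are hypothetical, just for illustration; not any real framework's API), the disclosure can simply be a side effect of the assignment call the tooling already makes:

    import hashlib
    import json
    import time

    DISCLOSURE_LOG = "public_experiment_log.jsonl"  # hypothetical public log file

    def assign_variant(user_id: int, experiment: str, variants: list) -> str:
        """Deterministically bucket a user and append a disclosure record."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        variant = variants[int(digest, 16) % len(variants)]
        record = {"experiment": experiment, "variant": variant,
                  "user": user_id, "ts": int(time.time())}
        with open(DISCLOSURE_LOG, "a") as log:
            log.write(json.dumps(record) + "\n")  # disclosure happens as a side effect
        return variant

    # Every assignment gets logged with no extra work for the developer.
    color = assign_variant(42, "submit-button-color", ["blue", "green"])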


I think the hope is more along the lines that they'll have to tell you if they are running something like the emotional contagion study.

https://www.forbes.com/sites/gregorymcneal/2014/06/28/facebo...

https://www.theatlantic.com/technology/archive/2014/06/every...


The text of the law is what matters, and it's pretty clear that

> Any large online operator that engages in any form of behavioral or psychological research based on the activity or data of its users shall disclose to its users on a routine basis ... any experiments or studies that user was subjected to or enrolled in

means that every A/B test needs to be disclosed.


No. Not every A/B test.

> (6) LARGE ONLINE OPERATOR

> The term "large online operator" means any person that—

> (A) provides an online service;

> (B) has more than 100,000,000 authenticated users of an online service in any 30 day period; and

> (C) is subject to the jurisdiction of the Commission under the Federal Trade Commission Act (15 U.S.C. 41 et seq.).

Presumably there are definitions of other terms in that sentence (e.g. experiments, studies).

Here it is:

> BEHAVIORAL OR PSYCHOLOGICAL EXPERIMENTS OR RESEARCH—

> The term "behavioral or psychological experiments or research" means the study, including through human experimentation, of overt or observable actions and mental phenomena inferred from behavior, including interactions between and among individuals and the activities of social groups.

Honestly, I don't think this is clear enough. Person clicks BLUE instead of GREEN may or may not fall under this definition. I don't think it should, but if I have 100M+ authenticated users per month, I'm probably going to put up a notice anyway.


> Honestly, I don't think this is clear enough. Person clicks BLUE instead of GREEN may or may not fall under this definition. I don't think it should, but if I have 100M+ authenticated users per month, I'm probably going to put up a notice anyway.

I'm not sure why you think an A/B test is not covered by

> the study ... of overt or observable actions and mental phenomena inferred from behavior

but it seems to me to be the very thing targeted by this legislation. I agree that the end result (Google disclosing 100k A/B tests each quarter) is a grotesque tax on private industry without any social gain. However, it doesn't strike me as terribly out of line with other legislation in its effect.


Define study. Define mental phenomena. Define behavior. I just don't picture the wording being clear enough that it wouldn't be scrutinized if some company failed to disclose seemingly benign (blue vs. green) A/B tests.

I'm not a lawyer though.


I am also not a lawyer, but I spend a great deal of time reading about these things and how they're handled in the courts.

Ambiguity in law is handled in at least two different ways: in criminal matters, ambiguity is read in the most favorable light for the defendant; in regulatory matters, the interpretation adopted by the regulatory agency responsible for the law's implementation is considered binding so long as it is a "permissible construction" of the statute. The latter is commonly known as the "Chevron doctrine".[0]

The long and short of it is that this bill, if enacted, will mean whatever the Executive Branch says it means. If it's particularly egregious, then their interpretation will be challenged in court and perhaps eventually trimmed down a bit.

0: https://en.wikipedia.org/wiki/Chevron_U.S.A.,_Inc._v._Natura....


At least this has a reasonable lower limit, so it doesn't screw over small platforms.


A simple disclosure, akin to the GDPR disclosure, doesn't seem unreasonable. Meatspace psychological tests would require informed consent and IRB approval. I agree that a more nuanced definition of what requires disclosure would be preferable (e.g. link tracking and testing button colors seem benign; messing with people's moods with different content algorithms is not).


GDPR disclosure is atrocious. We literally legislated popups back into websites and mandated that they exist.

I'd rather they not ask me anything and just assume consent. It's not like EU legislation is going to stop a Chinese company anyway.


Assume consent...for what, exactly? To note which links on their site you clicked, and which you ignored - OK, that may be reasonable. To share/sell that information to a multitude of other businesses, most of which you've never heard of? I'm not so sure. To follow your activity across the rest of the internet for the following month, and sell access to that data? No thanks.


When you see a popup about cookies on a site, what do you do? Read the agreement, close the site, or just click "Allow"? The vast majority of people simply click "Allow".

But even for the people who want to read the agreement, it would be much better if this was implemented as a browser feature, giving users control and consistency, instead of different popups on each site.


If I get a popup just saying that a site uses cookies, I may well allow it (knowing that my browser will clear the cookies when I close my incognito session, perhaps).

If I get a popup listing various kinds of data collection that the site wants to do, and lists of "trusted partners" it will be shared with, etc., I generally refuse everything except "essential". If the site's idea of what is "essential" sounds excessive compared to the use I expect to make of it (just how much tracking is reasonably required in order to read an article?), I simply won't use it.

And if it makes the process of refusing consent particularly opaque or cumbersome (in violation of GDPR requirements), I certainly won't trust or use the site at all (I'm looking at you, Oath...)


Does this apply to telecoms too, I wonder?


Disclosure here should not mean opening a popup; it should mean a separate page where the user can see what data has been collected and what the algorithm decided based on that data.

Similarly, GDPR should have been about a requirement to provide a page listing all of the user's data, not about endless popups on every site asking to accept cookies. If I use a browser that supports cookies, then I already accept them.


I wonder why people disagree with this. Do people think that a popup is better than a settings page where the user can see what data was gathered and how it was used? Or did my point about accepting cookies being a browser feature seem to some like an attack on privacy in general?


Why is law so inaccessible?

I've always thought about trying to scrape government websites for law changes and drafts, shift them onto github and give the respective politicians clearly labelled pseudo accounts in an organisation in which they all vote on pull requests.


If you take a look at different states' online legislation you will see the level of tech is very poor. I would say it's around 2000-2001 standards even if the platforms are newer.


> The bill would bar companies from choosing groups of people for behavioral experiments unless the companies get informed consent.

Wait, does this refer to A/B testing?


Thanks to sbov for posting the text of the doc here: https://www.scribd.com/document/405606873/Detour-Act-Final. My take on this (and IANAL) is, well, yes, it most definitely would apply to A/B testing in certain circumstances.

Many types of A/B tests are designed to increase conversion - to get a user to buy something, or signup, etc. I have personally (and I'm sure lots of folks on this site) been involved in A/B tests that specifically test what many would consider "dark patterns" to increase conversion.

Just take a look at Booking.com, which is famous for their A/B testing. Right now I get a popup banner when I hit that which says "Welcome back! It's always a pleasure to see you! Sign in to see deals of up to 50% off." I guarantee the text in that banner has been A/B tested 9 ways to Sunday. I'd even bet they tested the percentage amount (i.e. whether it was 50%, 30% etc.) Of course "up to 50%" could mean 0%, which it probably is in most cases. And the whole purpose of that banner is to get you to authenticate and sign in, so they can track you better.

So yes, it most definitely will apply to certain forms of A/B testing. That also appears to be the point.


If an A/B test leads a site to adopt bad shit, then ban the bad shit. Don't ban testing.


So, users would much rather get ads they aren't interested in? The point of A/B testing is to show you something you want to see and might be interested in purchasing.

This is a good thing


> the point of a/b testing is to show you something you want to see

Not always. A/B testing is the reason that we now have weapons-grade clickbait headlines, and those terrible little grids of ads at the bottom of blog posts. Neither of those are good things.


And you think we couldn't get to that point without A/B testing? Yellow journalism has been around for much longer than websites.


If they kill A/B testing then probably a generation of startups that modeled themselves after Google and their diaspora won't know how to design products anymore.


Good riddance! It's high time companies stopped gravitating to local maxima for every decision. As a user, I want thoughtfully developed experiences; not everything has to be a news feed.


A/B testing is an important super basic step to improving the user experience. Without it how would you know what users are looking for? It's important to test the right factors though. I can see it being done wrong and winding up detrimental to the user experience, but not doing it all is definitely not a solution.


How essential is it though really? How did we ever manage before it became a thing?

Obviously a good number of A/B tests are pretty innocent, but if it's non-trivial to differentiate between them and https://en.wikipedia.org/wiki/Nudge_theory then I'm 100% for completely ditching A/B tests.


How is A/B testing anything other than the scientific method with a control and a variable?

The answer is: modern science has always used a form of A/B testing.


I think the medium, data collection, and scale matter. It's never been so effective or efficient as it is now (and will become).

Gathering data from a million people on which shade of red makes them more likely to click a button is entirely different today due to the scale, how cheap it is to setup, and how cheap it is to tweak. This data can then be used to "nudge" people towards a direction that you benefit from (and they may or may not benefit from, and society at large may or may not benefit from). At scale, these very small nudges can have an impact. The unregulated methods we use for this keep improving (AI).

Not to throw shade, but there's a reason why Amazon has been hiring behavioural psychologists. We should be aware and thinking about this.


I disagree in that I feel the higher effectiveness results in better UI.

Perhaps we need to simultaneously inform people through better education at the same time for how to resist the urge to spend borrowed money whenever possible?


Incentives of the publisher and the consumer aren't always aligned. The publisher might want you to spend / use the mobile app (tracking) / nudge your political leaning / confuse you with disinformation / etc. The consumer / user is simply outgunned, and it's getting more and more lopsided. Regulation is inevitable.

This isn't just about good UI. Not everyone is using these sort of behavioural tests to present a better UI. It's also about influence (micro influence). I'm not sure you're seeing the whole picture.

It sounds like conspiracy, but Obama and Cameron had "Nudge Units", and that was 5 years ago.

http://freakonomics.com/podcast/white-house-gets-nudge-busin...

https://www.warc.com/newsandopinion/opinion/how_the_likes_of...

https://www.forbes.com/sites/beltway/2015/09/16/obama-nudge-...


Before, we did A/B testing with paper and a writing tool, no doubt.


I think it's good policy and will also be really funny, which is why I think it should become law. It'll force Silicon Valley to learn empathy overnight.


It'll also lead to even shittier UI because you can't test it on your actual users.


About time!


How would this even work? Say I decided to buy two billboards with different designs selling the same product, and put a different phone number at the bottom of each. Looking at my phone bill at the end of the month, I have a count of how many responses I received from each billboard.

Have I just committed a crime?



> BEHAVIORAL OR PSYCHOLOGICAL EXPERIMENTS OR RESEARCH—

> The term "behavioral or psychological experiments or research" means the study, including through human experimentation, of overt or observable actions and mental phenomena inferred from behavior, including interactions between and among individuals and the activities of social groups.

I'm guessing courts will decide.


> Wait, does this refer to A/B testing?

I hope so.

Experimenting on your customers is about as user hostile as you can get.


Are you even serious? We're not talking about A/B testing prescription drugs with placebos. We're talking about testing different images. Different colors for buttons.

Settle down.


Very serious. Where does it stop?

"It has published details of a vast experiment in which it manipulated information posted on 689,000 users' home pages and found it could make people feel more positive or negative through a process of "emotional contagion".

https://www.theguardian.com/technology/2014/jun/29/facebook-...


That’s not A/B testing, or at least not just A/B testing, that’s a full blown psych experiment.


But it's really not a one-off. This has become modern-day marketing tactics. I guarantee someone gave a presentation today to a bunch of execs about how to manipulate a percentage of your users to achieve [x] goal by lightly "nudging" them.


Saying that A/B testing is just different colors for buttons is intentionally ignoring the past 10 years of Facebook's development process. Every single aspect of the platform is A/B tested, and that platform has a big effect on people's lives.


I'm still confused about why I'm supposed to be upset that Facebook A/B tests their features on their users. It seems to me that if they're allowed to do either A or B, they're allowed to measure the influence of A vs B. I don't see where the outrage is.


You shouldn't be upset about that. You should be upset that Facebook is performing tests on its users to optimize against the interests of those same users, without letting the user know what they're doing.


As I said in another comment this is about consent.

It's not that testing the conversion rates of button A versus button B is in and of itself immoral; it's that experimenting on people without their informed consent, under any circumstances, is. I'm intimately familiar with FB's platform as a developer and a user, and it's my intuition that 9/10 people aren't aware of the degree to which they are being experimented on via multivariate testing, and I think a reasonable person would say they have a right to be informed of this.

Another note is that after years of using the platform, I can tell that when non-technical people DO become aware of the fact that their experience using the application is sometimes fundamentally different from others' because they're in a non-control bucket, they generally react pretty negatively to the notion. Sure, some of this is the standard "users always hate every UI change no matter what it is" syndrome, but I've noted a lot of "this is creepy and I wonder how much it's been happening before", which is, imo, a super legitimate response, and shouldn't be disregarded because it's inconvenient for FB to get consent.


Consent only applies for things you wouldn't be allowed to do without consent in the first place. What if Walmart decided to have the greeters at half of their stores be rude to customers and compare sales numbers? Would that require advance consent? Clearly it wouldn't because there is no law against bad service. The fact that the click whores who call themselves journalists (who are also competitors of FB) call it "psychological experiments" to scare non-technical people is irrelevant.


"What if Walmart decided to have the greeters at half of their stores be rude to customers and compare sales numbers? Would that require advance consent? "

To me, this could definitely qualify as "psychological experiments" if it were intentional as you describe. Most likely a failed and useless experiment, though, and that's due to the medium and the difficulty of implementing it correctly (how would you guarantee none of your greeters step out of line? What if you wanted to quickly evolve and modify the experiment?).

The fact is that it's much easier to run these sort of experiments on a web site than it is in meat space. It can also be much subtler and far more specific. It would be impossible to manipulate the variations in the real world as efficiently (or at all) like you can online.

The ability to actually do this stuff efficiently and at scale is pretty recent, and we ought to consider and deliberate over the consequences.


Feature experiments are also a thing that exists. I want to deploy a new widget, and need to check that it works, and hasn't done something unexpected that drives users away. Experiments are how you do it.

How about these: Are corner stores allowed to experiment with pricing? Are restaurants allowed to experiment with new menus? These are experiments involving humans. Are you just asking for poorly designed experiments?

What you're asking for is companies to launch once and never know if it worked. And indeed, software used to be like that, and it sucked...


> Feature experiments are also a thing that exists. I want to deploy a new widget, and need to check that it works, and hasn't done something unexpected that drives users away. Experiments are how you do it.

Experiments and experiments on live non-consenting users are two different things.

> How about these: Are corner stores allowed to experiment with pricing? Are restaurants allowed to experiment with new menus? These are experiments involving humans. Are you just asking for poorly designed experiments?

Let a corner store charge different people different prices and let me know how far you get. They also have to deal with consequences from their experiments: if a customer sees the price of an item has doubled in an experiment, they're unlikely to come back. There's an asymmetry issue, and not coming back is often not an option you have in an environment with lock-in and network effects.

> What you're asking for is companies to launch once and never know if it worked. And indeed, software used to be like that, and it sucked...

Yes, developers had to think through design decisions, stick to well defined HIG's and use controlled test groups, truly a dark age.


> Let a corner store charge different people different prices and let me know how far you get.

Well, under your proposal they can’t know how far they’ll get.

> Yes, developers had to think through design decisions, stick to well defined HIG's and use controlled test groups

Well, did they? To a greater extent than today?


Explicit consent is already given for feature changes just by using the site. How does the act of gathering scientifically valid information on those features substantively change the dynamic such that extra consent is required? It doesn't seem to me that it does.


These mega websites should probably be held to a different standard than the rest. No one's life is changed when I try out different colors, but some of the stuff Facebook is testing is very unethical.


I think the standard shouldn't be size, but type of software. Facebook is a platform. People expect (reasonably or not) some element of stability in a platform. I don't want even a small platform doing tests on me and my data.

But if it's a game, or a blog? Knock yourself out, no matter how big it is.


I agree with you, but personally I don't see how it's so onerous for Blizzard or Rockstar to tell me in plain language what it intends to do with its behavior tracking (or really that it's tracking my behavior at all). For me this is about consent, and I'm willing to consent to things that I'm made aware of. I mean, I'm a software developer too, I know there are legitimate use-cases here.


This: https://www.nytimes.com/2014/06/30/technology/facebook-tinke...

Which is to say, some tech companies think that A/B experiments that might lead someone to commit suicide are okay.


The alternative is to not know at all that this is possible and accidentally design something that makes people more negative.

Knowing this is possible and how to measure the effect, lets them detect when they accidentally do it and reverse course.

Making it illegal to figure out the negative impacts of your decisions will make it harder to avoid them.

It would be much better to require disclosure when these negative impacts are detected and require that this information can only ever be used in the best interest of the user.


Please find me one real scientist who would argue it's ethical to encourage some people to commit suicide to find out how to avoid encouraging people to commit suicide.

Just because the outcome of the research is potentially valuable doesn't mean it's ethical to conduct it on people, especially without their consent.


More accurately, expose people in a controlled setting to what they're already being exposed to, to find out its impact. But this sounds pretty neutral.


I wholly disagree. Facebook only shows some posts algorithmically, and emotionally, they were probably more or less neutral. They chose to expose some people to predominantly sad and depressing posts, which is not "what they're already being exposed to", and without the more positive posts to balance them out.

They explicitly created a situation to depress people, which could definitely increase the likelihood of suicide, particularly if they happened to randomly select someone who already was predisposed to that for other reasons.

I would argue someone at Facebook should've been brought up on criminal charges for this "experiment".


>Facebook only shows some posts algorithmically, and emotionally, they were probably more or less neutral.

I'm assuming you mean in aggregate?

> They chose to expose some people to predominantly sad and depressing posts, which is not "what they're already being exposed to"

Can you source your "predominantly" here?

If their algorithm is operating randomly, it stands to reason that some amount of people will get a "predominantly" negative feed from time to time. So in this sense, some people were unwittingly being exposed to a predominantly negative feed. So it seems reasonable to understand the results of this.

If their experiment resulted in people seeing negativity far beyond what is a possible outcome from their algorithm, then you might have a point about it being unethical.


> I would argue someone at Facebook should've been brought up on criminal charges for this "experiment".

What criminal law did they violate? Or are you saying there should have been a law against what they did?


Criminal negligence/manslaughter. You don't have to break a specific law; you just have to be culpable in something that could foreseeably have a reasonable chance of someone being harmed.


Criminally negligent homicide, and manslaughter are specific laws. Conviction generally requires proof that the subject's actions were the proximate cause (a legal term with its own case law) of the victim's death.

I'm aware of only one case where someone was convicted of such a crime in the US without being physically involved in the death: Michelle Carter, who directly and repeatedly encouraged her boyfriend to kill himself, and goaded him into continuing what was ultimately a successful suicide attempt when he started to back out. Despite her active encouragement and unambiguous intent, the legal theory was controversial and the case has seen multiple appeals.

I find it quite unlikely that a court will accept the argument that intentionally making someone sad is the proximate cause of their death by suicide, even if done to a large number of people at the same time. Were that argument accepted, it could be applied to other situations affecting the emotions of many people just as easily, such as producing a sad song or movie.


Indeed. And note that these laws are generally state laws in the US, not federal. So just in the US, there's a lot of possible versions of the law/jurisdiction Facebook could face.


I didn't think of that; the jurisdiction would be where the person died, so potentially anywhere in the world with an extradition treaty could charge them.


If their feature A is OK to do in isolation, and could conceivably lead someone to commit suicide, then I don't see the problem with measuring that impact vs some feature B.


Absolutely everything in business is an experiment.

Experimenting with customers is what gives you the information to make a better product.

It's the least user hostile thing you can get.


This seems to be ignoring consent. I'd bet you if I went around asking people whether or not they realized their FB app experience was being consistently multivariate tested i'd be on the street a while before someone said yes.

This is FAR different from product testing, say in the hardware world, where you tell people you want them to come test a product, or in the design world where you show them various things and quiz them on their feelings. In these situations they all know they're being tested on.

So no, this isn't "the least" user hostile thing you can do. Doing things without consent is basically the prerequisite for hostility here.


> Doing things without consent is basically the prerequisite for hostility here.

Do you understand how often websites change without asking the user? Websites are constantly being updated, algorithms tweaked, features being added and taken away. You seem to be taking offense to the fact that they're providing a different experience to different subsets of the userbase? Is that what you're trying to ban? What could that possibly accomplish?

If you don't have A/B testing, then websites are just going to do it the old fashioned way: collect data, make the feature change, compare the data. What does this solve?


Look this clearly is not in the spirit of the argument I'm making. Why would I advocate for rules that restrict a websites ability to change? Again, im talking about consent when it comes to how your behavior is going to be used.

Further, I would say that the parent post isn't even about this. It's about protecting the consumer, and yes, I would go so far as to say that if the "change" the websites want to make violates the rights of the user, then yeah, they should be restricted in their ability to do so!


No company in any domain rolls out products globally all at once. McDonalds introduces new menu items in test markets, tv shows start as pilots, software is deployed gradually and as it’s deployed it’s usually measured and rolled back if it’s not working as expected. AB tests are hardly any different. Smart companies experiment.

Why do users need to be explicitly informed of AB tests but not about other new gradual feature roll outs?

Frankly I think when you use a web site you are giving consent for your behavior on that site to be analyzed. I wouldn’t act indignant at traditional retailers attempting to learn from my shopping behavior in their stores so that they can improve their shopping experience. That’s just how businesses work.


> Again, im talking about consent when it comes to how your behavior is going to be used.

Why? To make a physical analogy, you're on their property. You're in their store, walking around perusing their wares, using their tools, so of course they have the complete right to watch you.

There is no way to legislate this, your only option is to raise a stink about it and hope that they'll be more transparent in the future. You can't "require" companies to tell you how they're using your data. Once you've consented to your data being collected, that's it.


It is very rare to see this happening with existing features that the users use and like.

And, they do have consent to change the site at any point. The opposite would be for websites to never be allowed to do any kind of update because they didn't have user consent beforehand.


Basically it depends on the experiment.

Does the homepage work better? Good experiment.

Can we manipulate people into feeling depressed or happy? Bad experiment.


> Does the homepage work better? Good experiment.

"Better" for who? The website owners interests are rarely in alignment with my interests. They want increased sales, higher user engagement, etc. I want less engagement and the ability to make informed choices on products.

Let's say there's a product listing with its list of features, and this list has been extensively tweaked to maximize sales. As a result of that tweaking they took away a line item that would have caused me not to buy it, say an annoying LED status indicator. This is good for the website owner but bad for me, because I've lost the ability to make an informed decision. It's asymmetric manipulation, and I'd regard it as immoral.


The goal is to improve their experience, that's the complete opposite of hostility.

Edit. My response was about website a/b testing.


>The goal is to improve their experience

And how is that measured by most social media companies? "engagement".


For these companies "improve their experience" = optimizing for maximum time spent with the product even when it's not in the best interest of the people using the product. i.e. Netflix autoplay, algorithmic newsfeed that tends to show more outrage-inducing content, and artificial notifications that aren't from people you know, but are engineered to get you back into the product unnecessarily. They just want to capture as much of your attention as possible because it increases the number of ads they can show/sell. That's not improving user experience, that's hijacking attention to maximize profit. Considering attention is the primary instrument nature has given us for crafting our lives, that's a pretty user-hostile thing to do.


Changing random stuff all the time is not improving my experience, giving me a consistent interface is.

And the goal is rarely to "improve my experience", it's to more efficiently manipulate me into spending more money, directly or indirectly.


I hope everyone down voting this realizes that A/B testing has an "infinite" resolution.

Most people when they hear A/B testing think of something like switching the color of a button, or what have you. But when you start A/B testing a complex series of permutations of components in aggregate, specifically designed to target certain psychological profiles, you can actually start to learn a lot about the group you're testing.

To pretend that A/B testing is some simple "what image do you like more?" game is entirely disingenuous and is so characteristic of the attitude people in tech have when dealing with people or any kind of social side effects of systems they build.

We absolutely do need to be very careful with this. Careful with how we test people, careful with how the information is recorded, and careful with what we do with the data.

I'm going to start calling A/B testing "psychological side channel attacks" and maybe then places like HN will appreciate what's happening more.
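
To make the "infinite resolution" point concrete, here is a toy illustration (the factor names are invented, not any real platform's tests) of how a handful of innocuous-looking per-component variations multiplies into hundreds of distinguishable experiences, each of which can be correlated with how a given user segment responds:

    from itertools import product

    # Hypothetical per-component variations being tested simultaneously
    factors = {
        "headline_tone":  ["neutral", "urgent", "fear-of-missing-out"],
        "button_color":   ["blue", "green", "red"],
        "social_proof":   ["none", "friend-count", "trending-badge"],
        "notifications":  ["off", "daily-digest", "real-time"],
        "feed_ordering":  ["chronological", "engagement-ranked"],
    }

    # Full-factorial multivariate test: every combination is a distinct experience
    variants = list(product(*factors.values()))
    print(len(variants))  # 3 * 3 * 3 * 3 * 2 = 162 experiences from five small knobs

    # Correlating which combination holds which users longest says far more about
    # those users than a single blue-vs-green button test ever could.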


I think people on HN generally have a good understanding of what A/B testing is: "if I change X, will users do more of Y?"

Of course Y is usually something that makes the site owner money. Sometimes A, B, both, or the entire business model of the site is unethical. In such cases, they would still be unethical if a comparison test was not run. In cases where none of them are unethical, I have trouble imagining a realistic scenario in which the act of running a comparison test makes it unethical.
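
For concreteness, the mechanics usually boil down to a comparison like this (the counts are invented; the only point is that the question asked is "did changing X move metric Y relative to control"):

    from statistics import NormalDist

    # Hypothetical counts: did users who saw the change (B) do more of Y?
    clicks_a, users_a = 1_200, 50_000   # control
    clicks_b, users_b = 1_310, 50_000   # variant with X changed

    p_a, p_b = clicks_a / users_a, clicks_b / users_b
    p_pool = (clicks_a + clicks_b) / (users_a + users_b)
    se = (p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided two-proportion z-test

    print(f"A: {p_a:.2%}  B: {p_b:.2%}  p-value: {p_value:.3f}")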


>I think people on HN generally have a good understanding of what A/B testing is

Not only do I not believe this, as most people on HN tend to have a very superficial understanding of whatever tech is being discussed, but there is always a dismissal of any social consequences that may happen as a result of using any kind of technology.

>I have trouble imagining a realistic scenario in which the act of running a comparison test makes it unethical.

I can't tell if you're being serious or not. You don't need any kind of testing framework for a few famous examples to satisfy your "realistic" qualification. I see a lot of bland contrarian stuff on HN, but I'm kind of speechless right now.


I'm entirely serious. To cite one of the most famous examples, Facebook ran a sort of A/B test where its algorithm was adjusted to attempt to make users happy or sad.

There's a fairly strong case to be made that intentionally making a large number of people sad just to see if you can is unethical. There's a somewhat weaker case to be made that manipulating the happy group was also an unethical distortion of their reality. I fail to see an ethical problem with the fact that it was an A/B comparison. Instead, A and probably B would be unethical to attempt under any conditions without consent.


It's as if millions of "Only 2 left!!" banners cried out in terror and were suddenly silenced.


Zero confidence that any part of the US congress is even remotely capable of producing a bill for this that isn't a complete disaster. Unfortunately, they'll probably try anyway.

One of those moments I'm glad that our government is so broken at this point that it can't really drive significant change.


Congress produces pretty decent bills when they are bipartisan. Most recent example is the criminal justice reform.


This bill is, or at least appears to be, bipartisan.

> U.S. Sens. Mark R. Warner (D-VA) and Deb Fischer (R-NE) have introduced the Deceptive Experiences To Online Users Reduction (DETOUR) Act.

https://www.scribd.com/document/405606873/Detour-Act-Final


And I feel better knowing that these two representatives haven’t been in the news for their views or for scandals. I’ll have to read the bill to know better.


This was just on the front page of HN a day ago:

A bipartisan bill that would not allow the IRS to offer free tax filing:

https://news.ycombinator.com/item?id=19613725

The Defense of Marriage Act was also bipartisan:

https://en.wikipedia.org/wiki/Defense_of_Marriage_Act

COPA was bipartisan:

https://en.wikipedia.org/wiki/Child_Online_Protection_Act


PATRIOT Act was also essentially bipartisan (while most opposition was from Dems, more Dems voted for it than against it).


They were afraid of being labelled unpatriotic. This was during the time when Senator Max Cleland was attacked for being "unpatriotic" even though he lost three limbs in the Vietnam War.


Fundamentally, it's about the fear of not getting re-elected.

And I would dare say that it's probably one of the biggest contributors to all legislation.


Criminal Justice Reform bill is the masterwork of Jared Kushner. Kushner is the reason why it exists at all, so it wasn't really "congress" that "produced" it.

http://time.com/5486560/prison-reform-jared-kushner-kim-kard...


Did Kushner write it?


We'll never know. But he _made them_ pass it, that much is certain.


> Zero confidence that any part of the US congress is even remotely capable of producing a bill for this that isn't a complete disaster.

Literally applies to every sector.


I'm pessimistic on this bill passing- Silicon Valley has also reached its fingers into lobbying. See: https://www.theguardian.com/technology/2017/sep/03/silicon-v...


Silicon Valley started going to Congress back in the late '80s in response to East Asian (Japanese in particular, I think) competition in semiconductors.


Government (and its cheerleaders) has the same solution to every problem: ban it, or declare that it can't exist. Which is the same solution a child could come up with.

Fixing a problem sounds good, so it's easy to market. Defining what the problem is and avoiding unintended consequences is hard.


It's not that complicated to define.

Lots of people have done it from Tim Wu, to Jaron Lanier to Clay Shirky to Tristan Harris. The problem definitions and solutions are almost 10 years old now.

It's just that YouTube, Facebook and Twitter didn't give a shit. Plus all kinds of side show debates about free speech, privacy, anonymity and half a dozen other things have taken focus away from the fundamental change social media introduced into the way humans as a group communicate.

That fundamental change is that these systems attach a publicly visible number to everything anyone says.

Whether it is a like/view/upvote/click/follower/retweet count it has an effect on how people think and behave.

Whether it is the President, a journalist, or a 10-year-old, the numbers have an unasked-for influence.

There is no good psychological/sociological reason for these numbers to be publicly visible in real time.

Engagement and recommendation systems can still collect these numbers and continue to do their jobs without publicly displaying any of them in real time, or at all.

These changes can't be rolled out instantly because a large part of the population are hooked and need to be slowly weaned off.


Silicon Valley has the same solution to every problem: more technology, or market forces. Everybody plays the game they know how to play.


I think what you say here carries a lot of meaning for how parents raise children. Kid is not doing something you like, or not doing something you want it to do? No! Don't do that, or Do this! Why? Because I said so. No TV tonight for you! But why do we not think to negotiate, and to reason? When we apply for a job, we negotiate with our employer for a salary that benefits us both, right? That's how it should be. What do companies want? What do users want? What is right, what is wrong? I am not sure any situation involving two or more parties is any different.


Instead of actually thinking of how the system works or how to change it, we get this stupid law from stupid people. Classic government.


Thank god, finally we'll be able to regulate user interface design. Surely needing to get your designs approved by the department of web design standards will be good for innovation.


Why doesn't the article excerpt any language from the bill, or otherwise link to it? It's hard to imagine a bill that defines, to an enforceable legal standard, what is a dark pattern, and what is an addictive game (another thing that the article mentions is banned).

EDIT: Thanks to sbov (https://news.ycombinator.com/item?id=19620441) for doing the legwork the news isn't doing and digging up the text. It doesn't seem to mention addictive games at all; maybe that's another bill?


> Why doesn't the article excerpt any language from the bill, or otherwise link to it?

This seems to be a trend among recent articles.


Funny that Facebook is for this, but it makes sense. It would stop any startups from having the same advantage as them in regards to sketchy growth practices.


The text of the bill specifies that it only applies to companies with at least 100 million monthly active users.


I wish a bill would also ban the cancer that is retargeting. I’m sure I’ll get downvoted for saying this as many entrepreneurs and SaaS companies browsing are making their conversions on retargeting, but something has to be said about the creepy nature of following people wherever they go without obtaining direct consent.


This seems fraught with first amendment issues. Could a lawyer explain how this could pass & hold up in court?


Courts have held that you can put a lot more restrictions on commercial speech: https://en.wikipedia.org/wiki/Commercial_speech


It's refreshing to see bipartisan efforts like this coming out of the current political landscape.


This would only apply to online services with > 100 million active users. And is it safe to assume that means 100 million Americans? Not sure how we'd apply to the law to users outside the US.

How many online services have that many MAUs outside Facebook and Google? Even Twitter doesn't have that many in the US [1]. Seems laser targeted at Facebook, but the abuses come from many many more companies.

[1] https://www.statista.com/statistics/274564/monthly-active-tw...


I don't know. Facebook already has everybody's data. Unless this bill will retroactively force them to delete it unless users explicitly consent to letting them keep it, it will restrict new growing companies more.


Would love to see LinkedIn reined in with their false notifications. They'll show one of those red dots, then when you click it nothing new has happened. There are a ton of other terrible dark patterns used by LinkedIn, but I forget to take note when I encounter them.

Slack needs a serious spanking for how they handle notification opt outs. As far as I know there is no global mute everything switch which should exist.


The bill is useless unless it also requires reacquiring consent for all the data that was already taken using these patterns. It should demand that Facebook and the likes get consent to keep the data, or remove it within some reasonable time.

Without that, the bill is more like a stick in the wheels for Facebook's future competitors.


I would not limit it to “large” online operators. Just make it a rule governing all websites hosted or otherwise based in the United States.

Little guys shouldn’t be given a pass on following the rules.


It's called regulatory capture, and don't forget that Facebook, which grew to the size it is in part because of A/B testing, now gets to lobby for rules that make it more difficult for newcomers to do the same. We used to have anti-trust legislation in the United States (we still do, but hardly use it). Special regulations for behemoth corporations are standard jurisprudence.


I sure hope your job doesn't involve anything that is related to websites. Regulatory capture of this kind will utterly kill small website operators.


Assuming they do this right, I mean they won't, but assuming they did, I'd like to see something similar for Operating Systems.

I left my work iPhone 6 on while on vacation and returned to a dead phone. When I charged it up and powered it back on, I was met with the "we're throttling your phone because of a power event" message. So I went on the adventure of noping out of that. Holy crap, Apple does not want you to turn it off once they've enabled it. And apparently they enable it any way they can, because the phone battery is at 95% capacity and it runs like a top.


This is just a hack and doesn't solve the real issues.


> This is just a hack and doesn't solve the real issues.

The real issue is the incentives created by shareholder capitalism. While I would like that solved, that solution is so far away that I'll take whatever hacks I can in the meantime.


The real issue is incentives created by advertising and people not paying for products because we've become disillusioned to being the product.


If they ban all dark patterns but also prevent the federal government from providing free digital filing of tax returns, I am ok with it...


This legislation sounds so broad based on the article. Unless you know what you're doing, legislation can be so broad that people wind up in court for years and no cases are filed in civil court. I can't imagine it being effective in any way. With that said, I haven't read the details of the actual law, so maybe it is solid.


It's about time. The amount of children already addicted to screens is terrifying


The addiction extends beyond age


Not arguing that, but it's arguably more damaging at such an early age when the brain is developing


How about we conduct (and replicate!) the studies first, then make the laws?


dark patterns like offering health insurance...


Good!


Insert "dark pattern expert"




