So, when is the FTC going to actually bring down the hammer on FB for violating the consent agreement? There's no way this was "unintentional."
At $40,000 per user per day [1], even at just one day of violation, that's a $60 billion fine FB should be liable for. "Under the settlement, Facebook agreed to get consent from users before sharing their data with third parties," so this seems to be EXACTLY in violation of that agreement.
*Edit: on second thought, it should be even higher, as each of the 1.5M users had multiple contacts uploaded. So, for example, let's say 1 user had 150 contacts who were not part of the other 1.5M users who had contacts uploaded. That alone should be a violation of the consent rights of those 150 people, so $6 million per day. If every one of the 1.5 million people had, on average, 150 contacts exclusive of the other 1.5 million people who had contact info uploaded, that's a $9 trillion liability for one day of violation.
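To make the arithmetic explicit, here's the back-of-the-envelope math (the $40,000 per violation per day figure is from [1]; the 150-contacts average is just the assumption above):

```python
# Back-of-the-envelope liability math from the comment above.
PER_VIOLATION_PER_DAY = 40_000        # statutory figure cited in [1]
AFFECTED_USERS = 1_500_000
AVG_EXCLUSIVE_CONTACTS = 150          # assumed average, per the edit above

one_day_users_only = PER_VIOLATION_PER_DAY * AFFECTED_USERS
one_day_with_contacts = one_day_users_only * AVG_EXCLUSIVE_CONTACTS

print(f"${one_day_users_only:,}")     # $60,000,000,000
print(f"${one_day_with_contacts:,}")  # $9,000,000,000,000
```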
The FTC has been toothless on this for quite some time now, so I'm expecting no significant action, as FB's lawyers will argue that, technically, no one's data was shared with "third parties." Well, shouldn't my contact info being shared by a friend with FB be a consent violation, since FB is a "third party" from my perspective?
Maybe I'm just ignorant, but I do not really see how this violates the FTC agreement, because it covers Facebook sharing user data (stored/tracked/gathered by Facebook) with third parties.
However, what Facebook did is far worse than violating that agreement. Facebook gained access to user data on third-party systems, to which they should never have had access. They gained this (unauthorized) access (at best without clear consent) under a false pretense (disguising it as a security requirement). Then they imported user data, with no relationship to their stated goal, into their platform.
Associative contact information is a highly valuable commodity to any company involved in marketing and social media. I've seen a lot of people argue that this could have been the result of a lapse of oversight, but that sounds like arguing that a gemstone trader might have "accidentally" stolen a large quantity of rough gemstones while claiming not to have known their value. Even if theoretically possible, it's extremely unlikely that nobody within Facebook knew or realized the value of this data.
Either way, Facebook gained access to highly valuable assets. Even in the unlikely event of a sincere lack of oversight, it would demonstrate a level of incompetence that warrants holding them criminally liable all the same.
Moreover, Facebook might actually have outright violated the Computer Fraud and Abuse Act (CFAA), in particular the "access in excess of authorization" part, but I'm not sure.
The FTC complaint lists a number of instances in which Facebook allegedly made promises that it did not keep:
- In December 2009, Facebook changed its website so certain information that users may have designated as private – such as their Friends List – was made public. They didn't warn users that this change was coming, or get their approval in advance.
- Facebook represented that third-party apps that users installed would have access only to user information that they needed to operate. In fact, the apps could access nearly all of users' personal data – data the apps didn't need.
- Facebook told users they could restrict sharing of data to limited audiences – for example with "Friends Only." In fact, selecting "Friends Only" did not prevent their information from being shared with third-party applications their friends used.
- Facebook had a "Verified Apps" program and claimed it certified the security of participating apps. It didn't.
- Facebook promised users that it would not share their personal information with advertisers. It did.
- Facebook claimed that when users deactivated or deleted their accounts, their photos and videos would be inaccessible. But Facebook allowed access to the content, even after users had deactivated or deleted their accounts.
- Facebook claimed that it complied with the U.S.-EU Safe Harbor Framework that governs data transfer between the U.S. and the European Union. It didn't.
In my opinion, Facebook no doubt violated the FTC agreement and pretty much every other promise they ever made. However, that's a different matter. I do not argue against them being punished for that, but this situation appears to be a different kind of case: a violation of the law, to be precise, and it should be publicly prosecuted accordingly.
What users do, or should do, isn't really a part of this argument. This is about Facebook violating agreements they made with governments, or violating laws outright. That is a subject for public prosecution. How users should respond is a different matter.
While dropping Facebook and Instagram is easy, WhatsApp is much harder. It's become the main communication medium for many, including a lot of group chats.
While you can abstain, it creates all kinds of awkwardness that I'm not willing to deal with currently.
This is a federal crime as FB is accessing a system that it does not have permission to access. This is the same as you or me hacking into someone else's email. People have gone to jail for a long time for just this crime.
I agree, or so it at least appears. Now we will have to wait and see if the US government (and those of other countries) will actually go after Facebook. They might just do whatever they can to avoid it, either because of their own personal interests (maybe even a corrupt nature), or because Facebook's reach might actively bully them out of it (away from the public eye, of course). I don't expect anyone to face substantial consequences for these actions, which says enough in its own right.
And our low expectations are part of the problem. We assume now that large corporations should be able to get away with behavior that would send anyone else to jail.
Also, let's see a list of the various FTC settlements with FB. And a list of the FTC employees who worked on those settlements who are now working for big tech.
I know one FTC employee who worked on the 2011 FTC/FB settlement (which required FB to obtain independent 3rd party audits certifying their privacy program for 20 years...never mind the subsequent violations and settlements) is now “head of privacy” for a certain social networking company.
FB's public comments about these remind me a lot of the "5 Standard Excuses" scene in the '80s BBC sitcom Yes Minister, where a civil servant lists the best CYA mea culpas for politicians to use when something goes wrong.
1. It occurred before certain important facts were known, and couldn't happen again.
2. It was an unfortunate lapse by an individual, which has now been dealt with under internal disciplinary procedures.
3. There is a perfectly satisfactory explanation for everything, but security forbids its disclosure.
4. It has only gone wrong because of heavy cuts in staff and budget which have stretched supervisory resources beyond their limits.
5. It was a worthwhile experiment, now abandoned, but not before it had provided much valuable data and considerable employment.
For those who haven't seen the clip, [1]. Yes Minister is a brilliant piece of satire (though it does have a somewhat unfortunate Thatcher-esque streak in its treatment of unions, although it would've been difficult to avoid ridiculing unions in satire from the 1980s).
The episode where Hacker and Humphrey argue about subsidizing the arts is one of my favourites, with the latter taking the position that local league football is commercial and shouldn't be subsidized, whereas arts cannot survive via market forces alone.
Especially since it comes up again in a later episode, with Humphrey discussing the ramifications of civil servants conceding the reins of power to ministers who'd be under pressure to carry out voter demands.
Humphrey: "How would you feel if Radio 1 played pop music 24 hours a day? Or if they took the culture programmes off of television?"
Bernard responds, "I don't know, I never watch them."
Humphrey says "Well neither do I, but it's vital to know that they're there!"
It is definitely brilliant and is one of my favourite satirical shows, but that doesn't mean it's beyond criticism.
My primary criticism is that it didn't actually satirise the government's ideology or policies; the main targets were politicians, civil servants, and interest groups like unions. Those criticisms were actually in line with Thatcher's ideals and policies at the time (it's therefore unsurprising she said it was her favourite show).
I'm not trying to blunt any of their wit, just point out that (like most works) it had its shortcomings. It's unlikely the BBC would've aired it if it had just been a scathing ridicule of the PM at the time.
I don't think ridicule is bad at all (and their episodes which were quite heavy on the union-bashing had quite a few nuggets of truth in their ridicule of middle management overruling common sense). But focusing on unions as the source of the problem is shifting the focus of criticism away from the actual source of issues -- austerity.
The workers (and thus the unions) were opposing government policies that were hurting them. Funnily enough (though unsurprisingly), strike rates as well as union membership fell under Thatcher because of her union reforms (which removed much of their bargaining power) -- and so the commonly held view of daily strikes in the 1980s (something Yes Minister capitalises on) isn't really an accurate portrayal.
By laying criticism on one and not the other, Yes Minister showed their Thatcher bias. And it's not at all a stretch to say they had a Thatcher bias -- Thatcher herself said that Yes Minister was her favourite show. So while Yes Minister was very heavily critical of the civil service, they weren't fundamentally critical of the views of the government. Thatcher was very strong on civil service reform too.
> (4) knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value
A criminal investigation into whether or not this was really accidental would be entirely warranted here. If there was intent to access this information without authorized access that is criminal.
> A criminal investigation into whether or not this was really accidental would be entirely warranted here. If there was intent to access this information without authorized access that is criminal.
I don't understand this. Claiming that something is an accident and not intentional usually isn't much of an excuse when it comes to criminal acts.
"obtaining anything of value" could be satisfied by getting personal data which today is akin to profit, but the "intent to defraud" would be hard to prove in court, save for some very broad and dangerous intepretation of "intent" which could equal sloppiness to malice, a precedent that might ruin the lives of honest people who just happen to be clueless sysadmins or developers.
Totally agree though on investigating whether this was really accidental or not; if it was done on purpopse I would expect FB to be hit really hard.
Not a lawyer, but at least in my jurisdiction, fraud requires a monetary loss by the victim.
Generally, civil law is better suited for this sort of thing, no matter how good a pitchfork feels in your hand. As but one of the reasons, the required standard of proof is much lower.
Yeah, 18 USC 1030 (a)(2)(C) might be a better fit:
> Whoever ... intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains ... information from any protected computer ... shall be punished as provided in subsection (c) of this section.
(The definition of "protected computer" encompasses any computer that is "used in or affecting interstate or foreign commerce or communication".)
There’s got to be a monetary loss here. If there isn’t precedent for calculating that loss, such precedent should be established. Our email contacts are valuable, especially at 150m user scale. We could have all banded together and sold them, had Facebook not stolen them. These users should be compensated.
Of course. If the email contacts were of no value, Facebook wouldn't be taking them from accounts. People tend not to steal worthless assets. Unfortunately, monetary loss for the user may be tougher to prove than monetary gain for the thief.
> There’s got to be a monetary loss here. Our email contacts are valuable.
Why? Nobody lost their contacts, so what’s the $ amount it cost them? Facebook claims they’re deleting them. If that’s true, then Facebook isn’t gaining from the contacts. If users don’t lose anything and if Facebook doesn’t gain anything, what is the monetary loss?
> especially at 150m user scale
Where’s that number coming from? The article talks about 1.5 million users.
> We could have all banded together and sold them, had Facebook not stolen them.
So while it's entirely true that contacts should never be copied without consent, and that's exactly what happened, don't forget that these users consciously gave Facebook their passwords. No matter how much I trust what someone says they'll do, my email account password gives access to everything in my email account; I've always thought it was a terrible, terrible idea to hand it over when connecting services together, for this very reason. I'm saying it's partly the users' responsibility, and the outcome here is predictable, because it has been predicted before by many people.
BTW, nothing stopping you from banding together and selling email addresses now, if you think it’s a good idea... the blip with Facebook is not in any way preventing that from happening.
>don’t forget that these users consciously gave Facebook their passwords.
There is a lot of legal precedent about social engineering and how to prosecute it; this would completely fall under fraud. If I ask someone for their password to perform some service and then I copy all of their data, that is a crime regardless of how stupid they are.
In a case of fraud, it really doesn't matter that you gave the password willingly; it was given under false pretenses. If someone asks me to give them something so that they can provide a service, or takes those things as an investment, I willingly give them those things, yes, but we have a written, verbal, or implied contract that they will do and will not do certain things with that information. Failure to follow our agreement, and instead robbing me, is a crime.
Hey I’m 100% with you. I’m not defending Facebook, and it’s crazy they ask for passwords. But just because Facebook’s at fault doesn’t mean that it’s okay as a user to give out your password, nor does it mean that you lost any money when contacts are copied, right? The words “stealing” and “robbing” don’t really convey what happened here, even in the case Facebook isn’t telling the truth.
You are saying that the words 'rob' and 'steal' don't convey what has happened here, but this is only true in the colloquial sense. There is a good reason why many legal codes and laws start off with an exhaustively long list of definitions. Legal definitions often differ in very subtle ways that maybe aren't apparent at first glance.
If you don't think that this is the proper framing, maybe consider a different one. It is clear that there is definitely room to interpret this as a civil or criminal act regardless of how the parties craft their arguments. For example, imagine an employee who, on their last day of work, copies company data under their authorized username/password, even data with no obvious monetary value. This is often charged as a clear criminal offense. So to reiterate: an employee with authorization to access a dataset copies a large dataset with no obvious monetary value on their last day of work, but one that they weren't given permission to copy. There are cases that have been literally this, and it is easy to see how this incident could line up with that legal approach.
I think you are fixating too much on a critique of the specific charge listed at the top of this thread. I was defending the idea that there would probably be a way to go about mounting a case in that way. You seem to think that this is the incorrect legal framing, which is totally fine. The legal process is more of a subjective art than a science.
What made you think I was fixating on anything? I just agreed with you that Facebook's action is at least negligent and could be criminal. I guess I'm fine with the word stealing in the sense of information theft. Still, Facebook claims it was an accident and that the data is being deleted. It might have been intentional, but I'd wait to call it intentional until proven, even though they've done it intentionally in other cases. :)
All I'm really saying is, no matter what, don't give out your password. And if you do, don't pretend to be shocked when something bad happens.
> Nobody lost their contacts, so what’s the $ amount it cost them?
Opportunity cost? If Facebook has these contacts now, then their third parties have them, so those contacts are no longer as valuable, if valuable at all.
> Where’s that number coming from? The article talks about 1.5 million users.
My bad, added two orders of magnitude by accident. I knew something was off there. Thanks for the correction.
> Opportunity cost? If Facebook has these contacts now, then their third parties have them, so those contacts are no longer as valuable, if valuable at all.
We don't know that's true, I would be cautious about making assumptions. But, even if we assume it is, opportunity cost isn't equivalent to financial loss, so we can't say people lost money they weren't already making.
Anyway, I don't think email lists being sold has prevented email addresses from appearing in other lists. It's clear to me that nobody is tracking the value of my email address because marketers keep buying it over and over.
That said, from my point of view, I don't like the idea of selling my own email address or trying to extract money from it. I don't want that, and I don't agree with the idea of selling my privacy in order to battle my concerns about Facebook taking and/or selling my privacy. The selling of my privacy is the very thing I don't want to have happen.
Privacy is not a monetary value for me, it's something I value having, not something I value selling. I don't want it to be subject to capitalist thinking and market analysis.
I think 'monetary loss' has a bit more of a meaning of actual money or assets lost, not the loss of potential earnings you weren't really planning to pursue. Not saying I think it's not an issue! But I don't think the term 'monetary loss' is applicable.
Unfortunately I'm not a lawyer, so even my creative reinterpretation is moot, but I was thinking along the lines of a class action. Why can't that group of people form a class? Is there really no damage here?
Of course we need actual fundamental privacy protection.
The statute says "anything of value." Here the thing of value would be a person's contact list. The attempt to gain this thing of value through deceit (telling the person you are trying to verify their account and using the access they give you to steal their contact list) would be the fraudulent act.
The fact that Facebook put a system in place to obtain these contact lists is evidence on its own of their value, but that value could also be quantified without much difficulty.
The only real question is: was dropping the consent form without removing the feature an honest mistake, or was it done because somebody decided it would result in a lower bounce rate and thus more money for Facebook?
If criminal law isn't capable of handling a hacker who hacked 1.5 million victims, criminal law is broken.
(If Facebook changed its name to Lulzsec2.0 of course the FBI would be very interested in the situation.)
And while the previous commenter quoted the part of the CFAA that mentions fraud, fraud isn't necessary to violate the CFAA. All you need to do is exceed authorized access to any internet-connected computer. Is there any doubt that Facebook has admitted to doing that?
It's not hacking. It's social engineering. It's no different than some smooth talking "Nigerian" getting your grandmother to cut a check. No systems were hacked here, no technical errors or design loopholes were exploited. People were persuaded into doing things that gave Facebook the access it needed to obtain the contact info.
There's no law that makes "hacking" a criminal offense. This particular case is just manipulation/social engineering so you probably shouldn't be calling it "hacking" on a message board that's mostly populated by software professionals to whom "hacking" has a meaning that does not include what is basically a con-man trick (though I see you have already edited the parent comment to reflect this).
We were literally just discussing the law that makes hacking a criminal offense. The Computer Fraud and Abuse Act makes it a federal offense; most if not all states also make it a state crime; most if not all other countries also make it a crime in their jurisdictions.
And yes, tricking someone into giving up their password is hacking (as any hacker will tell you), and it is a crime to use that password to swipe someone's contact database.
I'm not sure I can continue this thread with you because it seems you are very confused. I have also not edited any comments here.
Simply asking for email passwords indicates an intent to gain unauthorized access, and disguising the request as being part of a security-enhancing action eliminates all doubt.
- developer A is tasked with creating the prompt that asks for the username and password of the email account
- developer B is tasked with calling some API to upload contacts from the email account
- developer C is tasked with binding the two functionalities

Now replace developers with teams and you see how simple it is for the average developer to underestimate the scope and the ethical bounds of a given task; a sketch of how those pieces combine follows.
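A minimal sketch, with all names hypothetical (this is emphatically not Facebook's actual code):

```python
def prompt_for_email_credentials():
    # Developer A's task: render the "verify your email" form.
    return {"user": "alice@example.com", "password": "hunter2"}

def upload_contacts(credentials):
    # Developer B's task: given credentials, import the address book.
    contacts = ["bob@example.com", "carol@example.com"]  # stand-in for an IMAP fetch
    print(f"uploading {len(contacts)} contacts for {credentials['user']}")

def registration_flow():
    # Developer C's task: "bind the two functionalities."
    # No single task owns the question of whether the user consented.
    upload_contacts(prompt_for_email_credentials())

registration_flow()
```

Each piece looks innocuous in isolation; the privacy problem only exists in the composition, which is exactly where ownership is weakest.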
That implies that you, as a developer, then hear news stories like this one and simply ignore any role you may have had in the situation. It implies that you simply ignore that your manager or engineering leadership is asking you to do things that are unethical without informing you about how your work will be used. It implies that you continue to work for that leadership knowing that they will lie to you, hide their true intentions, and use your labor to execute profoundly unethical practices.
It's not news at this point to anyone working at FB what their leadership is engaged in, and what their work is being used to accomplish.
Perhaps several years ago you could claim some kind of ignorance.
That's no longer the case. You know who you work for. Own it.
I’m more surprised that people are still being “surprised” that Facebook isn’t a wholesome company out to make the world a better place through algorithmic social manipulation.
Let's not ignore the possibility of one or two senior developers each being given a suitcase full of cash. It's not like learning to program magically gives you unbreakable ethics.
Even at this point, you’re not getting a mass exodus of workers from Facebook. Those in there are choosing to be there at this point. Koolaid or not.
But you are right, scope creep in the “unethical” aspects and it can suddenly be “no one’s fault”. That isn’t a bad plan.
I’m not one of them, but let me play the devil’s advocate...
You're getting paid 2x market salary ("market" here being non-Facebook and non-Google, which isn't any better) and delivering services to people who voluntarily sign up for them... I mean, there are worse jobs in the world.
“That’s a really dick of an idea and I’m pretty sure it’s illegal. Exactly how illegal, I’m not sure. But I know illegal to some degree.”
“You live in a shit apartment because housing prices are stupid and makes your salary meaningless in this town. Here’s a wheelbarrow full of hundreds and we all agree it was an accident.”
I'm dead serious when I say that no two large-scale projects are done the same way. I have seen many, and I can tell you the possibilities for how a project gets approached are infinite.
But seriously. There’s no accident in what happened. This is Facebook. Anyone who thinks Facebook isn’t morally corrupt probably also says “What do you mean Stalin wasn’t a pacifist?”
>what is your tipping point? Would you say no to that assignment?
When FB stops giving them a check.
At least that has been my experience watching programmers at other companies. Unless ethically bound by regulation and law, few people seem to have ethics.
Methinks that was what the OP was asking: what is the tipping point? Having differing ethics is fine, but one can't just lean on that as a crutch when one has none.
What tipping point? Maybe it's the engineer themselves who recommends the practice. Everyone has ethics, but theirs might differ from yours. As long as the practice meets their goal, from their perspective it works.
From the article it sounds like there was a prompt for permission that got removed:
> Facebook told Gizmodo via email that in May 2016 it made a revision to the registration process, which originally asked the affected users for permission to upload contact lists. That change removed the opt-in prompt, though the company did not realize the underlying functionality was still operating in some cases.
It doesn't take a conspiracy to understand how a bug like that could happen.
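For what it's worth, here's one hypothetical shape such a bug could take (invented names, not Facebook's actual code): the prompt gets removed, but the upload it used to gate was never conditioned on the user's answer in the first place.

```python
SHOW_CONTACT_IMPORT_PROMPT = False    # flag flipped off in the 2016-style revision

def ask_user_for_consent() -> bool:
    return False                      # dead code once the prompt is removed

def upload_contacts(user: str) -> None:
    print(f"uploading contacts for {user}")

def verify_email(user: str) -> None:
    if SHOW_CONTACT_IMPORT_PROMPT and not ask_user_for_consent():
        return                        # the old opt-out path
    # Bug: with the prompt gone, nothing guards the legacy import,
    # so it silently runs for every new registration.
    upload_contacts(user)

verify_email("alice@example.com")     # uploads, though nobody was ever asked
```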
This reminds me of the Firefox/Google tweet storm. A bunch of "bugs" or "unintentional feature" that get fixed with a seemingly honest apology, only for another "bug" or "unintentional feature" to take its place.
At some point, it goes from "the occasional bug" to negligence at best, and hostility at worst.
It's such a coincidence that these accidents keep happening in ways that enable further data gathering... surely there isn't a larger problem with Facebook's attitude towards their users' private data or anything.
"New Facebook Feature Allows User To Cancel Account. ... The company later confirmed that account closures would not stop Facebook from continuing to acquire, permanently store, and sell all information about its current and former users until the day they die." https://www.theonion.com/new-facebook-feature-allows-user-to...
That bug would be a critical failure and be caught; the reverse would be a non-critical bug that the PM decides to put in the backlog, because reasons. If in a year we haven't gotten around to fixing it, then it's time to clean out that backlog!
When every public-facing thing you build is centered on hoovering up data, you're going to have two broad classes of errors. Hoovering up too little data, which doesn't hit the news, and hoovering up too much, which does.
That said, when your "errors" directly line your pockets, you're not entitled to the benefit of the doubt.
It's pretty hard for me to imagine that there's some other function that just happens to involve accessing different email servers and collecting email addresses from past emails.
It was deliberate because of the work involved. The only investigators that think it’s accidental probably believe the internet is a small black box guarded by the “Internet wizards”.
I'm not excusing FB, but it still makes sense. Their whole business model is data collection on their users, graphing connections between these users, and brokering deals with advertisers about users on the platform. When something goes awry, you can bet that it will somehow affect one of those things.
Doing QA at large tech companies is never that simple. You have lots of teams that share code. Imagine a scenario where Team A uses code written by Team B which uses code written by Team C. Team C makes a change to their code that breaks Team B's code but only for the way Team A uses it.
For the people in the back, "Facebook is a multi-billion dollar company." They have 30,000 employees. They could spend the money to do better QA. But it's cheaper to let your end-users do it for free.
It doesn't matter if it's simple or not. We can't hold these 1k+ engineer teams to lower accountability standards than a 3-person team. When, as engineering professionals, are we going to put an end to this? This is completely unacceptable in any other engineering discipline.
One way to combat this is to let teams register tests in other teams' projects. If a test fails, you know the change breaks someone's expectations. From there, you work with that team to update both sides.
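A toy version of that idea (all names invented; a real setup would wire this into Team C's CI):

```python
# Team C's code, stubbed here so the sketch runs standalone.
def normalize_phone_number(raw: str) -> str:
    return "+" + "".join(ch for ch in raw if ch.isdigit())

# --- contract test registered by Team A in Team C's project ---
def test_normalize_keeps_country_code():
    # Team A's sync pipeline relies on the leading '+' surviving.
    assert normalize_phone_number("+1 555-123-4567") == "+15551234567"

test_normalize_keeps_country_code()
print("Team A's contract holds")
```

If Team C changes the format, their own build goes red before anything ships, instead of Team A discovering the breakage in production.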
Just to be devil's advocate: Google is notorious for having separate siloed teams that do not share efficiencies or updates (e.g. Hangouts and the other messengers). Companies like Google and Facebook don't have a good excuse, but we shouldn't be surprised.
> A Facebook spokesperson also told Gizmodo that a screenshot of the original opt-in prompt was not available.
I'm not a conspiracy theorist but if you're trying to claim you cannot capture a screenshot from any release meant to be shipped out, either you're crap at release management or are full of shit. Which one is it?
Also, even if we were to suspend logic and believe this was a bug, what's FB doing to correct it? Are they deleting all uploaded contacts and going to request consent again?
FB is a cesspit. Get out of the company if you work there and get out of the platform in any case.
> I'm not a conspiracy theorist but if you're trying to claim you cannot capture a screenshot from any release meant to be shipped out, either you're crap at release management or are full of shit. Which one is it?
That doesn't strike me as especially unlikely, particularly for a specific branch of the app codebase that would likely only operate with a huge number of co-dependent codebases for backend systems that no longer exist.
With six months to recover code and build a non-live environment with all the dependencies could it be done? Sure. But that's not really within the scope of a journalist request.
I would say it should take less than an hour for a dev to get an instance of a specific revision up and running, not months... but I agree with the thrust of your message: when the reporter asked their Facebook representative for a screenshot of the box, the rep looked in their pictures folder for one. They probably did not ask a developer to spin up an instance.
“For the FB employees reading this: what is your tipping point? Would you say no to that assignment?”
There is a good chance that they didn’t know how their work would eventually be used. That’s the problem with big companies. Most people are far away from seeing the consequences of their work.
The tipping point is when the utility value of their paychecks no longer exceeds their personal sense of responsibility for the system they're complicit in.
In The Fine Article, it says that the feature was built on purpose, and previously asked for permission. The accident is that it wasn't completely removed.
They did have the upload-your-address-book functionality before they instituted this check. I’m very much hoping to see Facebook suffer for this, but I could conceivably see a scenario where they reused code that did more than they wanted.
It also takes extra work to ask consent. You build it. You don't notice that your confirmation screen fails to trigger. You've just unintentionally uploaded a bunch of data without consent, when your intention was to do it with consent.
It's still pretty darn negligent, but it's easy to see how it could be done unintentionally.
> It takes extra work to upload those contacts, which means several managers and developers decided to do it and then spent time implementing it.
Not really. Facebook is a bunch of autonomous services (registration, access, tracking, activities, etc.) accessing shared databases (chat logs, activities, media uploads, etc.) with some kind of automatic implicit and explicit ACL in place. The suggestion/contact service got access to data provided through the email-not-working-with-oauth-so-let-us-use-automatic-token-delivery-and-confirmation-by-accessing-user-emails flow because it was told a new source of contacts was available for those users. So, not a straight path.
Accident/Blunder > Evil.
Now: GDPR? GDPR. And because of GDPR, those things aren't supposed to happen in Europe.
Considering the vast crowds of folks happily working for amoral places like investment banks (the 2008 crisis and its consequences) or wealth management (rich folks trying to keep as much money as possible from being taxed and used for public spending), the moral bar for the usual smart person is actually pretty low. Optimizing some ads seems pretty harmless by comparison.
As long as you don't see the evil literally being done, i.e. in the form of a row of inmates being sent to gas chambers, there are almost endless ways to persuade yourself that all is actually OK and fine.
First they ask for email passwords. Then the new users assume Facebook won't comprehensively mine their emails. Then Facebook awkwardly gets caught uploading 1.5 million users' email contacts.
It doesn't make sense for people to trust the service at all unless you assume one of two things:
1 - Despite all the outrage on Hacker News, and the NYT stories, our neighbours down the street and family members still don't know how Facebook works or what is done with their data
2 - They don't care about their data privacy. I've heard this claim many times, but the people saying it often change their minds when they read more news stories. I really do think people have trouble assuming the worst about the intentions of others and are inclined to be trusting.
Group #2 somehow lacks the imagination to see what could go wrong. They will learn when a consequence of Facebook usage is put in their face. I guess the recent news does not push it in their face enough.
It's like that with skimming, lock picking, server security, infrastructure security, basically everything security related.
>They don't care about their data privacy. I've heard this claim many times, but the people saying it often change their minds when they read more news stories.
"People don't care about a problem initially, then when it becomes graver they start to care"
I try to be an advocate for privacy. I really do. But everyone just calls me paranoid, asks why I need to be worried about my government like I have something to hide, or just stares blankly at me because they can't be bothered to actually think about the words climbing through their ears.
I'm going mental over the explosion of televisions in the last half decade which identify and report any content you watch on the TV by default, in exchange for $100-150 off the television (and even that discount was fluff to begin with... it's not a direct trade of $100 for your data).
I've set up about a dozen of these now for people and they just stare blankly while I try to explain what "Auto Content Recognition" means... Hello 1984.
I'm right there with you. Most people write it off as paranoia and give the tired "Doesn't matter if you have nothing to hide". I think the problem is the only reason we have to convince people to change is just principles. There is no existential reason right now to convince people to change their behavior. Sure it's scary having large entities vacuum up our data and spy on us but so far there are no real bad effects on the users being spied on that I know of.
These companies would not be wrapping their practices in secrecy and lobbying against user rights unless there were sinister aspects of Big Data.
And it's not just principles. The effects of Big Data are extremely tangible even if not to you in this particular time or space. Some feel the effects now, others will feel them later in life.
I'm worried my children or grandchildren could one day be denied healthcare or adequate education or loans or low insurance payments just because of my attitude towards my government and every other aspect of my lifestyle which gets swept up and analyzed by for-profit robot armies bent on achieving the Holy Margin.
I think the backlash is mostly just delayed. At some point revenue will take a hit, because engineers might refuse to implement these "unintentional" and "accidental" features on time.
There is no doubt that the public image of FB is changing significantly; a year from now things will not look better for Facebook than they do today, and most likely worse, I'd say. This is not something they can turn around anymore; the leadership is not learning and repeats the same mistakes over and over again.
You're absolutely right. Yet for some reason it seems popular to discount that possibility, particularly when invoking that thought-terminating cliché, "Hanlon's Razor."
Only idiots invoke "Hanlon's Razor" not jokingly. It's such an incredibly shallow and stupid thought indeed. I like the way you put it: "thought terminating." That's what stupid people generally prefer to do with arguments.
> "I really do think people have trouble assuming the worst about the intentions of others and are inclined to be trusting."
I think you hit the nail on the head. Even on HN, it's not uncommon to see a few comments on each negative story about Facebook accusing the media of a conspiracy against Facebook, claiming that the media is wrongly maligning a company which is merely the unfortunate victim of a series of coincidental accidents.
They have trouble accepting that a tech corporation like facebook actually might be rotten.
I think you see that more with other tech companies.
There is a certain amount of anti-silicon valley sentiment in the media and as a result there are a lot of stories maligning tech companies in ways that aren't always fair. Especially when the media companies are campaigning for some kind of problematic legislation that the tech companies are on the other side of and so will take any excuse to try to make them look bad.
Then there's Facebook, about which nobody has time to write a story maligning them unfairly because there is never that long between any of the stories maligning them fairly.
>They have trouble accepting that a tech corporation like facebook actually might be rotten.
Probably because they work for an evil tech corporation as well (this is HN, where SV techbros hang out, so the probability of that is pretty high) and want to ease their cognitive dissonance.
FB has said they'll be notifying the people whose contacts they "unintentionally" uploaded. How about notifying those contacts whose private details they illicitly obtained that their privacy has been compromised by Facebook? The innocents who signed up for FB and had their contact lists stolen (let's call it what it is) may or may not feel any moral obligation (more likely, they don't even see the issue) to notify their friends/family/plumber whose details they "lost" to a thief.
People can sue if they can find some claim to real-life damages... You'd only need a small percentage of the 1.5 million people and FB would probably settle out of court.
How about consumer and privacy laws? I know it varies from country to country, but the government can sue and fine companies and people in order to protect its citizens. I know, I know... I'm old fashioned like that.
> How about notifying those contacts whose private details they illicitly obtained that their privacy has been compromised
Because there's a difference between "we screwed up and obtained this" and "we screwed up, obtained this, then used it. Hope our use didn't result in any problems for you."
That's right. Whenever a computer system is breached, it is the breacher's responsibility to notify the affected people, not the entity entrusted with the information. That's why it's generally agreed that Equifax did nothing wrong when credit data was accessed.
This seems like a case similar to the Google WiFi data collection. Code written for one reason was reused in a different project without understanding what it would do.
Here's an example page from 2011 talking about Facebook's old feature to import contacts by giving it your email username and password. This was at a point when many webmail services didn't offer an OAuth API to do this, so it did make some sense at the time. It was still safer to do a CSV export and then import, but much easier for users to provide the password directly.
> Type your email address and password for the Web-based email or instant-messaging service that you want to import into the dialog boxes and click "Find Friends."
I thought of this as well. One difference, at least subjectively, is that Google seems to make far fewer of such mistakes.
Just as with people, it’s sometimes difficult to judge them for a single act. Only by aggregating behavior over time can we learn of their true character.
LinkedIn pulled something similar a few years back. At the time, I was using the same password for both my email and LinkedIn account, and found that people from my email address book were showing up as suggested connections. I can only assume "consent" for this was buried in the T&Cs.
In the case of LinkedIn, they do ask for consent. But even if you didn't allow it to export your contacts, other people may have allowed it, linking their LinkedIn profiles with your email address. LinkedIn then shows them to you as suggested connections.
This personally bothered me for a while back then.
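The mechanism is easy to sketch (hypothetical code, invented names): every consenting user's upload feeds a reverse index, so your email gets linked to you even though you never uploaded anything.

```python
from collections import defaultdict

# Address books uploaded by users who did consent.
uploads = {
    "user_a": ["you@example.com", "carol@example.com"],
    "user_b": ["you@example.com"],
}

# Reverse index: email address -> profiles that know it.
known_by = defaultdict(set)
for uploader, contacts in uploads.items():
    for email in contacts:
        known_by[email].add(uploader)

# When you@example.com signs up, user_a and user_b can be suggested
# immediately, even though "you" never shared an address book.
print(sorted(known_by["you@example.com"]))   # ['user_a', 'user_b']
```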
Since FB has gone out of their way to weaponize "friendship", my suggestion to everyone who likes to have some standards in their life and doesn't like to be manipulated like that is simple. Just do it back to them. "Unfriend" (IRL) everyone you know who works at Facebook and tell them you will "friend" them back once they leave the company.
Judging from other comments this is an unpopular idea, but why does business get to be some sort of quasi morality-free zone where nobody has to take responsibility for anything? If a friend works for a company that engages in activity that I find morally reprehensible, why shouldn't this affect our friendship? I think our society could really benefit from a little accountability, so in lieu of regulations and laws protecting us from corporations I think protecting our social circles from people who endorse the bad actions of their employers because "it's just business" is perfectly okay.
I see other comments talking about personal responsibility, but in the case of FB, the notion of a company selling your data is too abstract for many to clearly understand the risks/consequences. Should we put no responsibility on corporations to act civilly or at least legally? Should one not have a personal responsibility to engage only with corporate entities that behave civilly/lawfully/etc.? I really don't understand this mindset.
Or maybe people could learn "personal responsibility" again and realize that everything they give to Facebook is exactly like giving your life to any company like Coca-Cola, and that these companies can do pretty much anything with it within the limits of the "laws that are actually enforced", whose number is pretty much zero.
You're right and I appreciate your comment. You probably noticed taking responsibility is extremely unpopular nowadays in all aspects of life, most noticeable in politics probably.
People aren’t entitled to commit cyber crimes, just because it’s a convenient way to pay the bills.
So far as I can tell, this was Facebook exceeding authorized access to a computer system — at scale. If you or I did this, we’d be looking at felony charges.
I wouldn't say I necessarily agree with doing this when it comes to Facebook, but is there really no circumstance in which you think it'd be justified to cut off contact with a friend because of where they work?
For instance, if I had a friend whose job it was to design missiles that are used to bomb innocent people (at Lockheed-Martin, say), I would seriously reconsider my friendship with that person. Yes, it's "just their job", but choosing a job which requires having such warped ethics would make me reconsider whether I want to continue associating with them.
Nobody is forced to work at such companies. Yes, effectively all companies do things which we don't agree with on some level (unimaginably large amounts of tax avoidance being the most obvious example). But if a company's ethics are completely antithetical to your own, then I don't see how you could morally justify working for them.
(Obviously there are some understandable exceptions to the above, the most obvious being that in the US employees are effectively blackmailed into working for their employer because they'll lose their health insurance otherwise.)
On its face the statement is true. LM does design missiles, and some non-zero number of them have been used to kill innocent people.
I'm curious what part of the statement is important to you in making that decision though. Is it that LM is part of the military-industrial complex, full stop? That the weapons are used by the US military? That they are sold to and used by other governments? Would LM be acceptable if they created weapons that magically never harmed the innocent? What if they occasionally harmed the innocent but were always used by people with good intentions who were doing things you supported?
I was using Lockheed-Martin as an example of a "clearly immoral" company, you could replace it with any other example you can think of and the point would be the same (that at some point you have to accept that ignoring your morals in order to get a paycheck means you don't really have those morals).
As for my personal view, it's fairly clear that Lockheed-Martin props up (through lobbying) and profits (through government contracts) from the US war machine -- which in turn has killed millions of innocent civilians. And then there's the contractors that Lockheed-Martin has provided to government agencies to further strengthen the surveillance tools of the NSA, CIA, FBI, and so on. So, I think Lockheed-Martin was a good example of a "clearly immoral" company.
EDIT: You changed your comment after I responded to it. I don't think the ethics of hypothetical magic missiles is a super useful conversation to have (changes in technology don't change our underlying ethics, they just change what ethical questions are being asked).
On the question about unintended consequences, obviously in wars you can't guarantee zero civilian casualties and innocent bloodshed is inevitable (though still unjustifiable). But the US is currently engaged in several illegal wars of aggression (which is a crime under international law) and clearly planning to engage in several more. Personally, I think the "unintended consequences are inevitable in war" defense isn't available to you if the war itself was illegal from the outset.
Suppose we reverse that sentiment a little bit: I would never friend someone again who knowingly manipulated me to suit their employer and ultimately themselves. Which, once you remove all the layers of abstraction, is what it boils down to.
"Suppose you were bereft of morals, and suppose you were working at Facebook; but I repeat myself."
This may be an unpopular opinion, but things like this happen. Someone gets the task to implement a login and either doesn't realize they should be using OAuth or is simply too lazy to do so. Next, someone has the idea to suggest friends, so let's grab some email contacts for that purpose.
That stuff happens all the time at small companies. While it's certainly bad practice, it's often not evil intent, but just lack of technical skills (for the former issue) and missing sense for potential privacy issues (for the latter).
In case of a large company like Facebook, one could expect they'd have processes and education in place to prevent such incidents, but I guess this happened a while back when FB was much smaller than it is now.
> This may be an unpopular opinion, but things like this happen.
Yes, and at Facebook in the context of data gathering they seem to happen ALL THE TIME. So if they did actually care about privacy they'd make changes to curb these sort of "mistakes", but taken in aggregate the relentless "bugs" show a pattern of willful malevolence.
But it's been over a decade of these types of reports about FB and their behavior. FB should be asymptoting towards good ethical standards and software practices. These reports should be getting more and more rare.
Instead, they seem to be growing exponentially away from good ethics and practices [0]. It feels like it's getting worse, faster, not less worse and slower.
Not for one second do I believe this was unintentional, after all the data scandals where Facebook didn't actively care, or even compounded the problem by not acting in favor of privacy.
I think this company is inherently bad from the top and everyone working there is enabling them. Sure, it pays well.
Problem is, most bigger companies do bad things. See VW and the emissions scandal; I hope Winterkorn and the other top managers go to jail for that. Also, I'm biased: for me Facebook and Instagram are pretty useless, and the only useful product they have is WhatsApp...
Can't someone file a class action lawsuit against Facebook?
I mean, it's nice that they are deleting the information now, but they clearly did something wrong, and by basic standards they should be punished. Deleting the stolen information isn't punishment, and since they probably won't delete any new ad-targeting information they derived from the contacts, they are still profiting from it, so the punishment should be more than just a small fine (which I hope they at least get).
I'm just sick of them (and other companies) "accidentally" doing something wrong, and barely get a slap on the wrist.
There already is a $78B class action lawsuit against Facebook over the Cambridge Analytica scandal: $1,000 per American whose information was harvested. It's hard to Google for, however.
>Facebook says that it didn't mean to upload these contacts
How can you not mean to? It would be one thing if this were something tangible, like paper: "Sorry, mate. These pages snuck in with the others. We'll pull them out. No worries."
Pulling contacts and uploading them is not a passive action but takes active action.
>and is now in the process of deleting them.
So, the question must then be asked: How do they differentiate the sources of contacts associated with an account, unless they're logging that, as well? If they're not logging that, then how are they, presumably, deleting those contacts?
Are we taking bets on Facebook being in the news again, in a months' or so time, for being found to not have deleted them? :)
> Pulling contacts and uploading them is not a passive action but takes active action.
Action such as "accidentally" asking for email passwords. It is quite remarkable how these accidents line up just so.
Grammar-checking programs should be flagging any use of "accident", "accidentally", "unintended" and "unintentionally" whenever they appear in the same sentence as "Facebook" and are not within quotes.
Indeed. Expect the next headline to be, "Facebook 'unintentionally failed to delete' 1.5M people's contacts, which they'd previously unintentionally uploaded".
This seems like 'growth hacking' gone wrong. Facebook's growth has been losing momentum for several years now, and it seems to me they are trying to make up for it by using every trick they have up their sleeves.
They might want to rethink their motto, 'Move fast and break things'.
It's my understanding that they used to do this entirely intentionally at one point via an "import contacts from mail" feature; then they dropped the feature, and when they later added the "sign in with e-mail to verify your identity" feature, someone reused the old code without being aware that it would also harvest the contacts, which they didn't want this time.
It's the opposite of "privacy by default", basically.
I don't recall ever hearing that Facebook made a mistake which decreased the amount of data they collected or their usage thereof. Can anyone provide an example?
I'm sure Facebook has had bugs that broke various forms of data collection, or missed data they could have collected. We wouldn't hear about it, but it would be surprising if it hadn't happened.
At some point, some government is going to have to step in and stop Facebook. Five years ago, I would not have believed that I would have supported government action. Now, I’m afraid for the future if there is no intervention.
It only makes the job of intelligence agencies easier, and they're a tiny part of any government, and one that wouldn't play any role in intervening in a company as proposed.
Well yeah, I'm sure they do, but at what cost? The same data collection that government loves so much has been misused to throw elections and genuinely cast doubt upon the democratic process. Ultimately, creating these giant dragnets has only empowered companies to demonstrate a complete lack of regard for humanity.
If that’s security, I no longer want to be secure.
I don't know if you've followed the news, but multiple governments have investigated, sued and fined Facebook. A quick Google indicates Facebook may end up paying 1.6 billion to the EU. The UK is doing an investigation too, with FB's impact on the Brexit referendum, as well as the whole Cambridge Analytica thing.
If you're thinking Facebook is getting away with it, you're wrong.
Of course, they're mainly getting fined; if that isn't harsh enough punishment, then I don't know what to do next. That's dangerous territory.
Eventually fines can exceed revenue. There are also laws which allow board members to be directly liable, one of which (I have been told by trade-union-funded free legal aid) is the UK can go after board members who knowingly trade while insolvent — and demand they personally pay the debts.
Another (which is merely me reading the law and therefore probably doesn’t mean what I think it does) is prison time and equipment seizure if a business engages in copyright infringement commercially.
> that isn't harsh enough punishment then I don't know what to do next,
Split the business into smaller, independent ones. We've seen this before. There's enough services hiding inside FB that treating them like a monopoly is not a terrible idea.
1.) The US FTC really needs to update its working definition of a monopoly. “Consumer welfare” is normally shown via price and since free services are always free, it’s a tough thing to argue.
2.) Facebook owns about 70% of the social networking space, and Google and Facebook have a virtual lock on online advertising. Moreover, through its share buttons, Facebook has created a web full of data gathering; the sheer amount of information they have makes them very hard to compete against. Add in some regulatory issues with the Instagram and Whatsapp acquisitions and there's an image of a company that's just about impossible to compete against and that has used its clout to bring net harm to consumers.
They don't. I meant that the similar approach of splitting them up would make regulation easier and self-regulation better incentivised, for the same reasons monopolies are split.
Facebook faces a small punishment or perhaps a public rebuke from some politician, someone from the company makes some lame statement about how they’re committed to do better, and then within two weeks another story comes out that demonstrates they don’t give a shit about their users.
Facebook has thumbed its nose at every single attempt to rein it in. The next steps are dangerous territory, but only for companies that behave in tremendously antisocial ways. It would be a net win for the rest of us.
Well, not much is changing after these kind of fines.
> A quick Google indicates Facebook may end up paying 1.6 billion to the EU.
A slap on the wrist; Facebook had $8 billion in revenue in Q1 2019 alone. But first let's see if they actually end up paying that.
It's just like how banks change after receiving massive fines for their role in the crisis, money laundering, transacting to sanctioned countries (they don't really, aside from some minor internal processes to prevent the exact same thing from happening again).
I don't think personal liability for white collar crime would be "dangerous." Someone either signed off on this, or negligently let it happen, and they should own it. Unfortunately, we'll probably just keep fining the company, and occasionally dragging Zuck in front of Congress for a bit of scolding and boilerplate apologizing.
Phones need better features to entirely prevent these things, so apps can't trick the user. I want to be able to give an application no access at all, something like Incognito mode for all apps, basically. The permission dialogues are typically not helpful enough to make a meaningful decision, and apps often don't function at all without certain permissions. So why not allow the phone to "fake" contacts, storage, location, etc.?
This could previously be done on custom Android builds with XPrivacy (an Xposed module).
It worked quite well for a long time, but tended to be quite a burden to maintain through OS updates. Starting with Oreo or so it no longer worked, but there was another similar module that had much of its functionality.
It could even go as far as exposing a subset of your address book to an app. So, for example, when I wanted to use WhatsApp I could just show it the 3 contacts that I wanted it to see.
The operating system should sandbox every app and by default provide it fake data for everything. The user should say what they really want to allow the app to access.
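To make that concrete, here's a toy sketch (Python; the API is entirely invented for illustration, no real OS exposes this interface) of a default-deny permission shim that feeds apps plausible fake data:

    # Hypothetical sketch of an OS-level permission shim; the API is
    # invented for illustration and does not correspond to Android or iOS.
    REAL_CONTACTS = [{"name": "Alice", "phone": "+15550100"}]
    FAKE_CONTACTS = [{"name": "Placeholder", "phone": "+15550000"}]

    class PermissionShim:
        def __init__(self):
            self.grants = {}  # (app, resource) -> "real" or "fake"

        def set_policy(self, app, resource, mode):
            self.grants[(app, resource)] = mode

        def read_contacts(self, app):
            # Default-deny: an app never explicitly granted access gets
            # plausible fake data instead of an error, so it keeps working
            # without learning anything real.
            if self.grants.get((app, "contacts")) == "real":
                return REAL_CONTACTS
            return FAKE_CONTACTS

    shim = PermissionShim()
    shim.set_policy("trusted_dialer", "contacts", "real")
    print(shim.read_contacts("whatsapp"))        # fake data
    print(shim.read_contacts("trusted_dialer"))  # real data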
I eventually switched to an iPhone and just don't install many apps.
iOS has a prompt before your address book/contacts are shared with any app and apps will always work without it (required by dev guidelines).
However note that this article is not referring to the Facebook mobile app accessing the mobile contacts -- this is about their service logging into a person's email service (like GMail) and downloading their email contacts.
This is not at all abnormal behavior. I'm consistently amazed by the HN crowd's lack of awareness of the habits of most users. Most people do not think about what they do on a computer even a fraction as often as a developer or other user here would.
My mother, for example, does not really understand that websites are run by individual entities. There's one "internet," and all websites are kind of like a strip mall under general management, so in her mind, if one page on Facebook asks for a password to read my email, how is that any different from reading my email on the Yahoo page?
All she knows is Facebook, an "official" website asked for a password.
Convenience. That is, Facebook - and others, like Skype - tells new users that the easiest and quickest way to find your friends is to send them your contacts so they can cross-reference the users.
And that, combined with me not paying attention, is how all my e-mail contacts got an email from Facebook in which I invited them to FB. That wasn't my intent!
Interestingly, WhatsApp (and Telegram, and Signal) didn't even ask and just uploaded all your contacts' phone numbers (this was before Android had the "Allow this app access to your contacts?" prompt). It's very convenient, and also very sad.
Also sad is the fact that BlackBerry already had a fine-grained permission system in pre-iPhone days, but it took iPhone and Android many, many versions and years before they built such privacy controls (but yeah, "We care about our customers' privacy" - Apple). And Google didn't even care about privacy back then: I remember the Google Maps app for BlackBerry would just prompt "Please give us all the permissions we want or this app will just exit now." on startup when you'd denied it a permission or two.
Signal doesn't upload your contacts' details anywhere. It hashes your phone number and sends that to a central service that knows which hashed numbers have Signal. Then it periodically asks that service whether the hashes of your contacts' numbers are in the list, in order to decide whether to suggest Signal for them instead of unencrypted messages.
It turns out that some people genuinely are forgetful enough that if they told their iPhone Bob's number, email address and shoe size in 2016 and then in 2019 their phone finds out that phone number is registered for Signal, they will conclude that the phone must have learned Bob's details from Signal, which in turn stole them from Bob as part of some nefarious plan.
You can't do anything about this; it's like the spam problem. If you send ten million very, very useful emails that are genuinely valued by every human recipient, hundreds of them will be flagged "spam" because humans aren't very good at this sort of thing. They press the wrong button, or they've been using "mark as spam" because they thought it was "mark as read," or they meant to mark the one below it, or above it.
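For the curious, a minimal sketch (Python) of the hash-based discovery scheme described a couple of comments up. Simplified for illustration: Signal's real protocol is more involved, and phone numbers are low-entropy, so bare hashing is only weak protection.

    import hashlib

    def hash_number(phone: str) -> str:
        # Normalize, then hash. Low-entropy inputs like phone numbers make
        # bare hashes brute-forceable, which is why Signal later moved to
        # more sophisticated private contact discovery.
        normalized = "".join(ch for ch in phone if ch.isdigit())
        return hashlib.sha256(normalized.encode()).hexdigest()

    # Server side: stores only hashes of registered numbers.
    registered = {hash_number("+1 555 0100"), hash_number("+1 555 0123")}

    def discover(contacts):
        # Client sends hashes of its contacts; the server answers which of
        # them are registered, without ever receiving plaintext numbers.
        return [n for n in contacts if hash_number(n) in registered]

    print(discover(["+1 555 0100", "+1 555 0999"]))  # ['+1 555 0100']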
Permission to read text messages is another one that gets me. I know not many people use SMS as their primary communication channel, but how can you be so astonishingly blasé about your data just to save typing in a code?
Thankfully there is now the SMS Retriever API that lets you do this without having access to all messages, and the Play Store no longer allows apps that require this permission without SMS handling being a core functionality of the app.
People who aren't very tech/privacy-savvy, like elderly people, or kids/teenagers.
I remember signing up for facebook when I was in high school, and I probably would've provided my email password if facebook asked for it...as an adult now I wouldn't provide my email password to anyone, of course.
Most people don't understand OAuth, so they don't know the difference between OAuth and giving out their password. Most people don't know they're doing this with bank scrapers like Mint!
I myself have had trouble figuring out whether certain dialogs were OAuth dialogs or just skimming my password, and I've been in web software for 20 years. A layperson has no chance.
It's a pretty easy mistake to make when you're new to the web, or simply don't care all that much how it works. I made the mistake of giving someone my contacts once when I was new to this stuff, and had many apology emails to send when my friends were spammed as a result. It was a harsh lesson in the web's fundamental hostility.
There was a class action lawsuit against them (LinkedIn lost it, IIRC) for what they did. I believe they would try to connect you with any of your email contacts if you logged in with OAuth.
SOFORT quite explicitly ~scraps~ scrapes the entire available transaction history "for your convenience" (much more is available with the login and password, actually). What a satisfaction when they tried to enter the Polish market and the Polish financial supervision authorities shut them down before they managed to squeak. So much for the famous German "privacy."
They claim it's to see that the customer is liquid enough, so more for the convenience of the seller.
It's incredible that the banks tolerated this service even though they told their customers not to give their credentials to a 3rd party. And not just banks: the German Federal Office for Information Security said the same.
I wish the banks would just block accounts they detect using the service, with an error like "We think your credentials have been compromised" (then again, the customer will think it's the bank that got breached). Or fine them, e.g. 100 euros, for breaching the user agreement. Then again, that would cost them a lot of pissed-off customers.
The idea of handing over my banking password to any third party is crazy. Mind you, I'd love an API that I could use to easily pull all of my banking details into my local system. There are a few ways to do this currently, but nothing simple, open, and standard.
P.S. As you seem to be a non-native English speaker, the word you wanted to use was "scrapes" not "scraps".
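To illustrate what "simple, open, and standard" could look like, here's a hypothetical sketch (Python; the host, endpoint, and fields are all invented) of a token-based account API in the spirit of Open Banking / PSD2, where a scoped OAuth token replaces handing over your actual banking password:

    import json
    import urllib.request

    API_BASE = "https://api.example-bank.test/v1"  # invented host
    ACCESS_TOKEN = "user-scoped-oauth-token"       # granted via an OAuth consent flow

    def fetch_transactions(account_id: str) -> list:
        # The token is scoped to read-only transaction access and can be
        # revoked by the user at any time, unlike a shared password.
        req = urllib.request.Request(
            f"{API_BASE}/accounts/{account_id}/transactions",
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["transactions"]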
LinkedIn was pretty bad, but Facebook was saying that your login information was only going to get used to verify your email. smt88 has a good analogy up there
How is LinkedIn not under more scrutiny right now? They used to ask for my email password all the time along with re-asking for access to contacts at EVERY LOGIN.
I know this isn’t a contest, but I always felt LinkedIn was twice as scummy as fb.
Why are companies even asking users to provide passwords for unrelated services? For example, when I added an external account on Etrade, they gave me the option of same day verification of that account if I provided them my online banking account credentials.
This practice opens up a significant potential for abuse and should be illegal.
Yes, but that doesn't mean that someone else needs my credentials to verify it. They should have their own independent method of verification. Do I need to give you my online banking's user name and password in order for you to send me money?
The only way FB will change its ways is if (a) good engineers stop joining them, and (b) good engineers at FB start leaving. This will threaten their entire growth prospects and finally bring about change.
I was having discussions with an FB recruiter and some of their senior managers. I just informed them that I won't be pursuing that anymore.
FB engineers who are on HN: why are you still there? You can make similar money at several other companies without sacrificing your soul!
The tech industry worships money and those who make it, and there are plenty of engineers who'd take the FB compensation package in a heartbeat, regardless of FB's public image problem.
This idea that the public will act together morally to stop corporate malfeasance while sacrificing their good fortunes isn't that realistic. Look at the FB shareholder situation. Lots of shareholders are angry at Zuck but can't do anything about it. None of them seem particularly interested in selling their shares because they don't want to have to pay for his bad behavior.
But there's some kind of inertia keeping those employees at FB for some reason. Why put in all the effort to leave for another company to get paid the same? Most people won't do that.
Engineers aren't going to start quitting en masse until their compensation is threatened (i.e. the stock irreversibly tanks). In order for that to happen, shareholders need to stage a massive sell-off, which won't likely happen soon due to FOMO.
>You can make similar money at several other companies without sacrificing your soul!
Google collects significantly more data than Facebook, and has a sordid past with sexual harassment and inappropriate relationships. Lyft, Uber, and AirBnB have openly flouted regulations, and that doesn't count Uber's other scandals. LinkedIn grew by emailing everyone's contacts without their permission (if you think what FB did here is bad, LI was far worse). High frequency trading and other fintech companies engage in front-running and derivatives trading that may be contributing to market volatility and systemic risk.
Comparably paying companies pretty much all have questionable histories.
Meanwhile, Mark Zuckerberg has committed to investing in improving Facebook even at great expense (don't believe me? look up what triggered the nosedive in Facebook's stock last summer). Do you think Facebook will improve more if conscientious engineers left the company?
Who's the "et al" there? Microsoft and Apple would be the logical successors, but I don't think most people would consider them "just as bad as Facebook".
My, how times have changed, if Microsoft is not considered a bad actor. There was a time when MS was Satan personified for many people in the OSS community (myself being one of them).
Please don’t downvote me into being the same color as the page background. I’m giving a serious answer to a question that was posed.
This has been asked before on HN. The genuine answer is some combination of:
* criticisms of FB are wildly exaggerated. This takes many forms, but in this particular case I think it’s the issue of attributing to malice what’s best explained by incompetence. Somebody probably just reused some old email importing code without understanding it thoroughly. If you know anything about how FB works, that’s infinitely more plausible than some shady conspiracy to unethically harvest the contacts of a small percentage of users for a slight improvement in ranking or targeting.
Facebook is not some well-oiled machine, it is a jumbled mess of thousands of junior engineers, perpetually barely avoiding collapsing under its own weight.
* People inside FB generally believe, whatever they think of Zuck, that he doesn’t just outright lie about verifiable facts. The entire code repository is completely open to all employees. If adding this feature really was malicious and FB’s response is an outright lie, somebody WILL find the commit and leak it.
* Even if FB is doing harm, on balance the good it’s doing is greater. It has made communication between humans easier and lower-friction which has many upsides.
Part of this is that all the upsides are concrete and obvious (people fall in love on Facebook/IG/MN/WA, they stay in touch with friends and family, they run a business, etc). Whereas the downsides are abstract and hypothetical (maybe someday someone will use Facebook’s collected data for some nefarious purpose).
* Even if all of the above is false and FB really is harmful to the world, the situation certainly won’t be improved by thinking people quitting, and leaving the company totally in the hands of yes-men who drink all the kool-aid.
Hi FB employee -- I very much disagree with your position. The external reality is far from what you have said.
* Criticisms of FB are not exaggerated. In this case, FB stole 1.5 million users' creds, then used those creds to harvest user data (nobody actually wants to give this data away; it was taken by force). If an individual did this, they would be in prison. FB gets away with it... again.
* People inside FB are a cult. It has been shown that MZ will lie about verifiable facts, even to Congress! It's wilful suspension of reality for the sake of a huge paycheck... "when his salary depends upon his not understanding it!"
* FB is doing harm, and the good is not, on balance, greater. It's divisive, promotes untruth, gives a voice to those who really shouldn't have one, spreads misinformation, promotes hate, and dismantles democracy. Not to mention ostracization, murder, genocide. Exactly how many FB whistleblowers have existed in history?
Is this worth it so people can share cat and dog pictures, and stolen memes from other media? I'd say, no.
* One of the most troubling moments in my recent history was visiting India, and seeing how FB is so influential in general discourse. I had people tell me great, unjust untruths like they were facts -- "I read it on facebook".
* If FB has no users anymore, it has no advertisers, it goes away. Better for the world!
FB is arguably the most destructive force of the 21st century. We will never be free until we shake its iron grip on humanity.
Please don’t imply that I’m speaking in bad faith and actually just motivated by money. Consider the possibility that I truly believe what I’m saying — anything less is just a horrible way to debate.
I am quite capable of making similar sums outside of Facebook. In fact I plan to leave soon for reasons that have nothing to do with ethics, and I don’t foresee myself changing my opinion once I’m no longer an employee.
Now to answer your specific points:
* Facebook did not steal credentials. They were willingly given.
* “Nobody actually wants to give this data away” how do you know? Do you have polling data on this? My personal belief is that most people don’t care at all.
Also you’re completely ignoring my assertion that it probably was just an accident. Hard to argue that something was “taken by force” by accident.
* Can you give me some examples of Zuckerberg knowingly lying about objective, verifiable facts to Congress?
* Well, your guess is as good as mine whether it's better or worse on balance. My intuition is that it's better. You haven't really argued against this, just given some examples of the worst possible downsides and asked if they're worth the most trivial upsides (conveniently ignoring the real value of communication tools in people's lives, which has nothing to do with dog pictures and memes).
In my experience, FB isn’t divisive at all. I use it to talk daily to people who have become very close friends and who live in a different city (my home town). Without that connection, I would be extremely lonely.
* I’ve heard plenty of people in the US tell me great, unjust untruths like they were facts. They saw them on TV or heard them on the radio.
* As for FB being the most destructive force, I think you’d have to give that title to climate change, resource depletion, terrorism, and war.
I accept that you believe it, but I wish to challenge your beliefs. I hear the same talking points from most FB employees. I suspect it's the local memes reinforcing cognitive dissonance.
* "Wallet inspector". I don't think the authorities would let you off if you claimed to socially engineer (steal) someones wallet. People wanted their FB supplied dopamine hit, and handing over email creds was the only thing in their way. It is coercion.
* Another FB meme: 'people don't care, so we can do what we like.' People lack the specific understanding of what they're giving up. It is coercive to take advantage of people like this. We talk of informed consent; FB's existence relies on action without informed consent.
https://news.gallup.com/poll/232343/worries-personal-data-to... A recent poll showed 55% of users are concerned about FB selling their data. That's a majority. I think society is growing wiser over time. My hope is that the social climate matures enough to understand what the individual gives up by using these services.
* MZ lies to Congress: "we don't sell data to anyone." It takes mental gymnastics to make this true. Selling user data to advertisers is the entire business model of FB - sure, it's not the raw bytes the user uploaded (though they have provided lots of record data to 3rd parties). The social graph is data, user data, and it is sold to advertisers, integration providers, hardware vendors, etc. How is MZ not a liar about privacy, again and again?
* I am ignoring your assertion that it was an accident. Stealing credentials isn't an accident. Full-take logging (capturing creds) on your HTTP gateways is an accident (kinda). Deploying credential-collecting walls is no accident. Deploying code that uses those credentials to harvest address books is no accident. It's a chain of malicious actions; it cannot be an accident.
* We have legislative and industry standards for radio and TV (aka legacy media) to ensure that untruths don't get very far. FB, not so much. Yes, all media can be a source of misinformation, but FB really is king here.
* I don't use Facebook and am extremely lonely.
* Point taken, there are worse things in this world. But to me, this is the most visible, and most actionable, today.
Thanks for engaging. I don't really know why I decided to write all of this, but I'm feeling mad about this credential harvesting. It's yet another strike.
I would assume that FB has gotten pretty good at hiring devs that match their culture. FB devs aren't reading HN articles about how bad FB is. FB devs are at FB because of the "prestige" of having been selected from tens of thousands of candidates. They're there for the money. They're there because they enjoy the projects. They're not there because they have some moral obligation to change FB.
This just isn't true. Plenty of people at FB read HN, everyone is acutely aware of the company's reputation, and there is robust internal discussion and debate about all of these topics (and more).
Apparently Facebook is claiming that the functionality came from a separate "import contacts" feature that used to exist. But I agree; the idea that the import logic could have slipped into the login process accidentally is ludicrous. Or at least it indicates an outrageous lack of care on Facebook's part.
That's not exactly true; it depends on the architecture in use. For instance, with a publish/subscribe model, you could have had a service that listens for an email account being connected, and since the only reason to connect your email used to be importing contacts, it would upload contacts automatically.
Later, when login with email was added, the same event was fired, but whoever added the event didn't know it would cause the upload of contacts.
That doesn't mean it wasn't shoddy craftsmanship, bad architecture, bad QA, and probably bad communication later on, but it could have been a mistake (at least at first).
It requires just one developer and a couple of reviewers to make poor choices.
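A contrived sketch (Python; all event names and handlers are invented) of that failure mode, where a subscriber written for one feature silently fires for another:

    subscribers = {}

    def subscribe(event, handler):
        subscribers.setdefault(event, []).append(handler)

    def publish(event, payload):
        for handler in subscribers.get(event, []):
            handler(payload)

    def upload_contacts(payload):
        # Written for the old "import contacts" feature, back when
        # connecting an email account could only mean the user wanted
        # their address book imported.
        print(f"Uploading address book for {payload['email']}...")

    subscribe("email_connected", upload_contacts)

    # Years later, a different team reuses the same event for login
    # verification, unaware a contact-upload subscriber is still listening.
    def verify_login(email, password):
        publish("email_connected", {"email": email})

    verify_login("user@example.com", "hunter2")  # contacts get uploaded anyway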
Which raises the question: how do you structure your organisation so that a foolish developer who only barely understands the change they are making can't write code that makes arbitrary queries against particular data sets in unapproved contexts?
Start by keeping the primary copy of the user's data on the user's own device so that the developers never have access to it to begin with. Then, if you ever have to hold a copy of the user's data, make sure it's encrypted by the client and your servers are never in possession of the plaintext.
To access the user's data, your developers should have to intentionally crack the user's password. And if they attempt to do that they should be fired.
Obviously this is not how Facebook works, but ideally it's how the thing that replaces Facebook will work.
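A minimal sketch (Python, assuming the third-party `cryptography` package is installed) of that client-side model, where the server only ever stores opaque ciphertext and never holds the key:

    from cryptography.fernet import Fernet

    # The key is generated and kept on the user's device, never uploaded.
    device_key = Fernet.generate_key()
    client = Fernet(device_key)

    # What the server receives and stores: opaque bytes. A developer with
    # full database access sees only this.
    ciphertext = client.encrypt(b"my address book, my messages, ...")
    print(ciphertext[:40])

    # Only the device (or a recipient the user has shared the key with)
    # can decrypt it.
    print(client.decrypt(ciphertext))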
There should be a name for this sort of software design. It's not just encrypted/privacy-oriented or whatever. It's a software design with a clear contract on who owns the data: the user.
E.g. Google Drive claims to take privacy seriously and also encrypts your data, but the data is not encrypted with a secret unknown to the server. How should my family members differentiate between the encryption Google claims to have and client-side encryption? For them it's all the same.
Maybe we need some commonly understandable name that a regular user can look at and know that this software is data-agnostic.
I don't think you need a board to stop this specific case. It's pretty obvious that what they were doing is unacceptable. The problem must have been a pervasive culture of lack of respect for privacy at Facebook, not a single engineer who somehow just didn't know any better.
Situations can get complicated. There might have been some side show reason to do this or that. Without oversight, some things will fall through the cracks.
A review board would a) give clear direction b) catch problems and c) put accountability where it belongs.
> It's an organizational policy, procedural, ethics and legal question - not a technical one.
Not really. You have to fall back to those things when a good technical solution isn't available, but sometimes it is.
Suppose you have a car, and four children. You can enact all the laws and policies and procedures you like, you can lecture the kids not to misbehave a thousand times. But the most important thing you can do, if you really don't want them out joyriding in the street, is to not give them the keys to the car.
The engineers at FB should be given product direction and should not even be making decisions about what information to ask users for. That's for Product.
Part of product management will involve legal review and risk management. Clearly, FB has a few other concerns that should be thrown in there as well.
This issue basically has nothing to do with technology.
The way Facebook works is that they have all your data on their servers. That is the underlying flaw.
You want to have your data, you want the people you share it with to have it, but there is no reason for Facebook to ever have it. When you share it, it should be encrypted by you and decrypted by the end recipient(s).
You don't need to worry much about policies for accessing data that you shouldn't, and don't, ever actually have.
But you actually say it requires negligence on the part of multiple people. I find it very hard to apply Hanlon's razor in this case.
Anyway, in my experience people do the stupidest, most reckless things despite being told not to, just because there is no fear of backlash. Holding people liable for their actions could be a start, but who wants that?
Also, don't hire foolish developers in the first place. Policing around PEBKAC is hard.
Have you ever worked for an organization? Three people making mistakes on a project, coupled with bystander effects, is par for the course. Being a million times wealthier doesn't make you a million times smarter. We're still talking about how Boeing accidentally crashed a few jet planes full of people, which required dozens of people to screw up.
I believe "move fast and break things" is a quote often attributed to Facebook. In this case the automatic importing feature might have been something broken, or something working correctly with an excuse that it was "broken".
If the feature worked by taking the email password the user supplied, then parsing their emails for contacts or logging in with the password and fetching the contact list, how on earth could that have been part of an earlier contact-import feature? Did they already ask users for their email password for that? If not, this is a feature that needed special code - impossible to be an accident.
> When you create a Facebook page for your business, you can import your email list of contacts directly into Facebook. From there, you can suggest your Facebook page directly to your customers. Facebook can interact with a range of email providers and only needs your email account's username and password to import your contacts.
> Type your email address and password for the Web-based email or instant-messaging service that you want to import into the dialog boxes and click "Find Friends."
Forget the contacts. People willingly gave Facebook their email passwords. Did Facebook also accidentally upload users' emails? Why would Yandex (from the screenshot) even permit this?
I hate it when I accidentally write some code to crawl email accounts for data and accidentally upload that data. Accidentally deploy that code to production, hide the opt-out button, and forget to post a disclaimer. Gosh darn it!
I'm just a mess without my morning coffee. If I don't get a good cup of joe in the AM I could do something reckless and random... like violate the privacy of millions of people! OOPS!
You know what I'm talking about!
Right!
...
right?
...
And if we consider Facebook's normal modus operandi: Today it's 1.5 million, a week or two later, they will say it was 15 million and 2 months later, they will say it was 150 million+.
Don't give access to your contacts, location, emails and photos to not just FB, but also WhatsApp and Instagram. If you must use them, try doing so from incognito browser windows. Facebook has proven time and again it cannot be trusted.
This is like the app version of "sorry honey, I totally didn't mean to stick it in your butt, but it was dark."
Facebook knew exactly what they were doing but they're playing dumb because it's less insulting to the recipient that way and they feel that will minimize the response.
I work in UX, and this isn't unintentional. The copy "Facebook doesn't save your password" proves this was intentional. I'm sure the PMs there are all drinking the Kool-Aid and are rewarded for getting as much data from the user as possible.
Can someone use a throwaway e-mail address to sign up for Facebook?
Once the e-mail address is validated, is there any further need for a valid e-mail address to continue using FB?
Historical fact: Going back to the days when a university address was required, if the user created her Facebook account while at university and her e-mail address later expired when she graduated, FB did not disable the account.
Unless one wants to get notifications and other FB crud via email, AFAIK there is no need for a working e-mail address to use FB.
Yes. I use throw-away email addresses for everything. When a company gets popped or "accidentally" leaks my email address, I simply add a header check and reject or discard their mail. I was on FB for two weeks when it started, and I still see them in my logs from time to time, trying to fish me back into the system.
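Something like this minimal sketch (Python; the alias names are invented examples, and the exact reject/discard mechanics depend on your mail delivery agent):

    import email
    import sys

    # Aliases that have been leaked or sold; invented examples.
    BLOCKED_ALIASES = {"facebook-signup@example.org", "leaked-shop@example.org"}

    def should_discard(raw_message: bytes) -> bool:
        msg = email.message_from_bytes(raw_message)
        recipients = (msg.get("To", "") + "," + msg.get("Cc", "")).lower()
        return any(alias in recipients for alias in BLOCKED_ALIASES)

    if __name__ == "__main__":
        # Exit nonzero to tell the MDA to reject or discard; the exact
        # convention varies by mail setup.
        sys.exit(1 if should_discard(sys.stdin.buffer.read()) else 0)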
I wanted to create a FB account while giving as minimal data as possible. While it's possible to create an account using temporary emails / temporary phone numbers, FB eventually asks you to submit more details.
This includes clicking verification links, uploading your photo, providing phone numbers etc.
Even when I managed to do all this (using fake data), my accounts got disabled within a few days.
PS: when I used the email address associated with my FB account deleted back in ~2012, I found out it wasn't deleted. FB asked me to recognize pictures of my friends. So I believe no detail that ever passes the event horizon of Facebook can ever leave it.
Just use a throwaway email account AND keep it? At some point they might decide to lock you out if you log in from a different place, I think it's better if you keep the email account safe.
I created a temporary email on my domain to use for the Facebook account creation and then disabled it so I'd stop getting spammed. I can always re-enable it if I ever need to.
Kinda off topic, but I find the lack of privacy people have online when it comes to advertising incredibly worrying. There is creepy retargeting, and then there is retargeting of specific individuals.
Right now I can find just about anyone's email, seed them with an ad pixel, show them hyperpersonalized landing pages, and follow them around online knowing exactly who they are, allowing me to tailor ads at the individual level.
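To make the mechanism concrete, a bare-bones sketch of a tracking-pixel endpoint (Python with Flask; the framework, route, and parameter names are illustrative assumptions, not any specific ad platform's API):

    import base64
    from flask import Flask, Response, request

    app = Flask(__name__)
    # A 1x1 transparent GIF, the classic tracking pixel.
    PIXEL = base64.b64decode(
        "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
    )

    @app.route("/px.gif")
    def pixel():
        # The identifier rides along as a query parameter on an invisible
        # image embedded in a page or email; every load logs who viewed it.
        visitor = request.args.get("id", "unknown")
        app.logger.info("pixel hit: id=%s referrer=%s", visitor, request.referrer)
        return Response(PIXEL, mimetype="image/gif")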
WhatsApp on iOS recently updated, and now it will only show phone numbers for contacts UNLESS I upload my contacts.
In the UI, if I tap on a number, it takes me to the profile where I can see that user's name, ~Tom. But wow, what a move... Have we reached the point where FB can't make any more money unless it goes deeper, or is this just drag-net "data is the new oil"?
It's the same on Android, with Contacts permission blocked it will show only numbers except for groups.
Furthermore, it won't let you start a chat with anyone unless it can access your contacts to find them. However, there's a great little app on F-Droid called 'Open in Whatsapp' that lets you start a chat with any arbitrary phone number.
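Presumably such an app just builds WhatsApp's public click-to-chat URL, along these lines (Python sketch):

    # WhatsApp's click-to-chat URLs take a bare international number with
    # no '+', spaces, or dashes.
    def click_to_chat_url(phone: str) -> str:
        digits = "".join(ch for ch in phone if ch.isdigit())
        return f"https://wa.me/{digits}"

    print(click_to_chat_url("+49 170 1234567"))  # https://wa.me/491701234567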
To be blunt: when I'm hiring for a developer position and interview an applicant who worked for Facebook, I'm going to have a lot of questions about exactly what they worked on. There's no way a feature like this was created by accident; the developers who put it together knew exactly what they were doing, and did it anyway.
I find it astonishing that people are still on FB, and even more so that people who are still on there have the slightest expectation that their data will be handled with care and respect for their privacy.
If you have a bunch of photos of Mr. Zuck with a frowning, sad, confused, etc. face, sell them! With the number of FB stories popping up, one could make a decent income out of them.
Why does it ask for your e-mail password to begin with? It is sad that there are 1.5M people out there (and probably more) that actually gave them their password. Scary.
Post-GDPR, this is "unintentional" and they try to make amends. Pre-GDPR, it would just have been a "happy accident" and they'd have swept it under the rug.
While they are deleting the imported contacts, that doesn't undo any potential shadow profiles they generated, any training of their ML models that associate users (relationships), or any training of their advertising models. I believe Facebook doesn't care about the contacts themselves; they wanted all of these collateral benefits that the general public won't be thinking about.
Honestly I don't understand why Zuck doesn't sell up at Facebook and use his considerable money and brains to move to philanthropy, like billg. His personal brand is going to continue to dive while he's the face of this bullshit.
Zuckerberg, like billg, has no interest in philanthropy until his mortality and his wife are staring him in the face, putting the fear of the afterlife in him.
The selfish part of me wishes that the media would stop reporting on the endless procession of privacy violations and attacks by Facebook. It doesn't seem to change a damn thing (Facebook revenue, DAU, etc. just keep going up). All it does is make me depressed, watching as we all just aimlessly shuffle pathetically toward some surveillance-capitalism dystopia.
The incessant stories about Facebook are beyond tedious. I don't even know how to complain about this. I suppose it would be nice if we could somewhere officially label Facebook as dodgy rubbish, and abandon everyone who continues to knowingly use it to suffer the expected consequences, and never have to read another unsurprising article about them ever again.
[1] https://www.cnet.com/news/facebooks-ftc-consent-decree-deal-...