"reports last June and December that Facebook had given business partners — including makers of smartphones, tablets and other devices — deep access to users’ personal information, letting some companies effectively override users’ privacy settings."
suggests breach-of-contract. Just by creating those privacy settings, the company is 'saying' to (promising) the customer who elects to use them that it will protect that information. That's an (implied) contract. Subsequently allowing anyone else to access that information is a breach.
It'd be interesting to see FB argue in court that breaking promises was okay as a form of restitution for the services they provide.
I have no idea what charges the Eastern District of NY might be seeking pursuant to these data deals, but maybe something like mail/wire fraud or honest services fraud? Again IANAL, but those are fairly broad and the government could make the case that Facebook fraudulently breached its duties to its users.
scale matters on these issues as well.
and if you are doing it as a big utility company (e.g. banks) then you are free to go :)
One of the courses is about disclosure of information to third parties. This info isn't part of my personal BAU (business as usual), so I might have it wrong, but:
A bank does two kinds of data sharing with affiliates: sharing that's necessary for business (credit checks, appraisal companies, etc.), which you cannot opt out of, and everything else, which you can.
It’s in legalese, but you can ask someone at your bank how to opt out of as much as possible.
It doesn’t help much, but it’s something I guess(?)
At least in the US, banks requiring all sorts of information do it not even as their own business choice -- they do it because government regulators insist that this data is collected because security, because terrorism, because money laundering, etc.
(They've since retracted that plan after the plan was met with widespread disapproval. I don't think they've tried again since.)
Turns out our society is really not designed for someone without a physical residence. It’s made for a frustrating year for me.
> Apple was able to hide from Facebook users all indicators that its devices were even asking for data.
I've seen a lot of discourse here that seems to favor Apple over the big "G" but... this seems pretty shady.
Anyone else know anything about this practice or what specifically they might've been referring to?
This isn't Apple cynically hiding selling your data behind your back. Back when they had Facebook integrated it was so that you could share stuff easier with your friends. Yes it was a security risk because of how Facebook used the information. That is part of the reason Apple removed the integration.
It's just both a grave security mistake and a breach of trust to treat Facebook preferentially without letting the user know.
To me, it's indistinguishable from cynically handing over the user to Facebook.
The user knew. The user asked for it. People were howling for Facebook integrations. This was back before Facebook's privacy transgressions were as widely known.
It's also worth noting that the integration was designed to allow posting to Facebook from iOS. Facebook then took advantage of that opening to get additional data from the device. That's why Apple slammed the door shut.
It's not good. It stopped in 2017 when they removed Facebook integration.
Replicant drawbacks include:
* Starved for developer time (e.g., low device support, slow/no updates for known exploits, question of how many eyeballs seeing changes, no current semi-secure way to browse Web).
* Only kinda barely minimally almost works.
* The supported hardware devices are not only few and old, but have fundamental security weaknesses in hardware.
* Android is a huge code base, and, going forward, to some degree you're at the mercy of whatever the developer decides to do with it.
* You're trying to adopt a platform that was, to some extent, designed around surveillance of the user. You can disable a lot of snooping, but you would design it very differently if privacy&security were goals.
Librem5 is sorta an option, if you can afford it, though they still use closed hardware black boxes. (Also, even when two black boxes are "isolated", and you think you control the communication channel, you don't necessarily.)
It's too bad that FirefoxOS didn't work out. The Gecko layer and up could've been moved to a purer Linux stack (ditching the Android build tree monstrosity), and to more trustworthy hardware.
Personally, I'm hoping we go back to the original Linux emphasis on mainline kernel, maintainable open drivers, blob-free. The closest effort I've seen is PostmarketOS, but it's still in an early state, and it could also be a little more strict about compromise slippery slopes (e.g., the wiki can be misleading about what works on a device, since it might be talking about a non-mainline kernel, and perhaps they should ban kludges to run closed Android drivers). I wish I had more time to work on pmOS at the moment (but am job-hunting, and my new open source work has to be much more near-term employable than this).
Though I suppose a dumb phone would do, or maybe a LineageOS/MicroG-based smartphone.
Would this mean that it will be criminal to allow companies to create alternative clients? That is a really interesting development.
Do we know that to be true?
In any case, I'm pretty sure they don't bring criminal cases like this frivolously. There's obviously something here.
They were not charged for special API access? Still, FB might have used this as a negotiating tool as they dealt with Apple, Microsoft, etc.
It seems like Facebook somehow gave these partners rather deep access. To all users, not just those using those phones or who opted into the arrangement.
And on the phone deals, and the criticism of friends/deep data access: if you want to build an alternative FB client, your phone has to have access not just to your data but also to your friends' data (i.e., all the data you have access to) in order to provide any useful service. So yes, technically your friends didn't agree that your alternative FB app can access their data, but you gave your access credentials (and permissions) to your phone.
When another comment is wrong, please explain why so we can all learn. Alternatively, it's always an option not to post.
To be clear, we built the 'Facebook experience' for our device because only we really could. During this era, the APIs were a disastrous mess; moreover, the special API that made the device so special was not available to the public. Ironically, it was our internal APIs that were making the special sauce!
For this purpose, Facebook provided users of the app we designed access to their own profiles. Obviously, this is a fairly wide API and it had to be made available specially for users of our app.
At no time did we ever have access to FB users' private information. At no time did anyone even remotely suggest anything inappropriate or nefarious. There were simply no moral or legal discussions on this front because it was moot.
The situation, net, was akin to Facebook having hired a 3rd party to design an app for them, giving that app the internal FB API necessary to function, and then distributing the app.
This isn't an issue of 'times have changed' or 'looking back we'd have done something different' rather - I can affirm that there was simply no bad acting, no breach of individuals accounts, and no undue risk to individuals accounts.
Obviously this situation is very specific, and conditions will have varied.
If FB was truly giving Bing special access without people's consent - this is a big problem.
The Cambridge issue - well - this is a tricky one, because Cambridge merely took advantage of the APIs the entire world had access to. There was little if any discussion of the inherent problems with those APIs, and when it looked like maybe they were being abused, Facebook did the right thing and closed them. They even went ahead and investigated Cambridge to ensure the data was gone, and Cambridge presented them with evidence that it had been deleted. I think in this case Facebook was a responsible actor.
Clearly there are more situations to consider, but we should be thoughtful in terms of how we approach newly released information and not get caught up with the mob.
Personally, I loathe the koolaid mob that built Facebook up, but I equally loathe the hate mob wanting to take them down.
At the time, nobody considered the issue to be hugely problematic, it was 'info on a site' and we treated it responsibly, but not like top secret data.
Also, we had no reason to want user data. Today, companies may or may not be able to use such data, but they all seem to be in the game of collecting user information systematically. I think this will evolve.
What is NYT source for this story?
A leak about the existence of an investigation?
NYT journalist saw entries for grand jury subpoenas on PACER?
How do they know the crime has to do with data deals?
We must wait until complaint is filed before anyone can disclose the statute allegedly violated, correct?
Yes, there is a lot (more) we would like to know.
But this is as good a time as ever to do a little experiment regarding the practice of anonymous sources: at some point, we are likely to learn more about this investigation. Then you can check whether the information we have now was correct, or whether, as the common accusation goes, it was a wholesale fabrication by the Times.
It sounds like a leak from within Facebook or another tech company on the receiving end of one of these requests.
I always thought grand jury proceedings are supposed to be secret. Maybe this explains the anonymity. I guess it is not unusual for the fact of the existence of proceedings to leak and for media to speculate? Does this have potential to negatively affect the outcome?
Not revealing sources is SOP
Most probably a US Attorney turning the screws on Facebook.
Negligence specifically does not: it only requires the existence of a duty of care (in all but specialized cases, that of reasonable care), and breaching it doesn't mean you were aware of the risk, only that a reasonable person in your place would have been. Recklessness requires conscious awareness and disregard of the risk, but not awareness of wrongness.
Harms that result from recklessness are mistakes, even though the recklessness itself is not; negligence, by contrast, is quite normally a mistake.
But of course there's always the Computer Fraud and Abuse Act which is so broad that it could potentially apply to just about anything anyone has ever done via an internet connection.
A multibillion-dollar fine? That's great, but even better would be to put Facebook's execs behind bars. A CEO shouldn't walk away with a stuffed bank account after years of criminal offences, violating the privacy of millions of people all around the world, and then take no personal responsibility for it before the courts. The fine is attributed to Facebook, but there also needs to be a hefty penalty for the people who ran Facebook, and that is the executive team. Jail terms must be given. In the long term this will set an important precedent and deter possible future offenders!
What is with this place wanting to throw everyone in prison? Is there some thrill you get from seeing executives in an orange jumper?
Fines can be far more beneficial to society. Make them pay in a way that actually helps other people.
It's not an either/or. You can be sentenced with both a fine and jail time. Plus, I feel like the goal here should be stopping illegal activity (should this investigation find something, of course), and I think jail time would be more of a deterrent than a fine. Plus, we already have a system in place for corporations to benefit society from the earnings society lets them make: taxes.
Start putting executives in jail, and behavior will actually change. Isn't that the desired goal?
Jail. Jail for a looooong time. Only in negatively and significantly affecting the life or lives of those responsible will there be adequate justice.
Better question is, what is it with HN and rallying to the defense of unethical and illegal business practices?
They're sent to prison because it has a deterrent effect, in an industry where insider trading is easily accomplished but also hard to prove in court.
Same goes here. The public has no ability to audit FB's data collection and sharing practices. We simply have to take their word for it when they announce a new "privacy-focused Facebook".
If prosecutors can find a criminal charge that sticks, any conviction should lead to jail time.
To me the answer is; do both.
Criminals. The word you're looking for is criminals. And yes, most people like seeing criminals in an orange jumper.
I mean, as long as the crime made a lot of money, it can't really be a crime, surely?
I hope you apply the same rubric to shoplifting, drug possession, and the dozens of other non-violent crimes that get people put in jail regularly. Otherwise this is just a justification for keeping power unaccountable.
If you make the fine large enough to actually matter, it would probably affect people working at the company who had nothing to do with the illegal activity. So maybe you should figure out who said 'OK' to the stuff, and who implemented it, and then put them in prison. Maybe then other companies won't do similar things.
I don't know if it would work, but the sentiment doesn't necessarily come from moral indignation or "retributivism".
As far as fines being incentives, if the fine is large enough it will work. Make it so shareholders feel the pain and the board will hold the executives accountable. Then you'll get the necessary feedback to curb these actions.
Like, say, not going to prison? You know, the main incentive used for regular people? Just spitballing here.
I suspect FB have broken many laws but perhaps the country simply has inadequate consumer protection.
Advocating for jail terms without specific charges is not the direction to go.
The article very clearly states that there is a criminal investigation going on.
I don't see how it's outrageous. We're (probably) not judges and we're not likely to be on the jury either. This kind of "I hope he goes to jail for a million years" posturing is quite common when people discuss any sort of criminal behaviour.
Is it beneficial or just? Perhaps not. But it's so natural that calling it "outrageous" is quite a reach.
I am not a lawyer, let alone a federal prosecutor. However, it does appear that actual federal prosecutors are suspicious enough that a crime took place to take it to a grand jury.
More importantly, judges do not pass legislation. They adjudicate law.
> the partnerships seemed to violate a 2011 consent agreement between Facebook and the F.T.C
which doesn't seem like it would be criminal?
Does the US actually have criminal laws regarding selling data? Any educated guesses on what's actually going on?
Violation of a consent decree can result in criminal contempt-of-court charges. See 18 U.S.C. section 401 (https://www.law.cornell.edu/uscode/text/18/401). See also United States v. Schine, 125 F. Supp. 734 (W.D.N.Y. 1954).
How might this affect an FTC action or a breakup conversation?
Because other people are doing bad things, it's OK for Facebook to do bad things.
I'm not sure that's how the law works.
That's literally how the law works. What "bad things" we prohibit people from doing is determined by law.
We don't have laws regulating the gathering and trade of data on populations or individuals.
> Privacy advocates said the partnerships seemed to violate a 2011 consent agreement between Facebook and the F.T.C., stemming from allegations that the company had shared data in ways that deceived consumers.
That (among other things) is what's landing Facebook in legal trouble.
The 2011 FTC consent agreement itself does carry the force of law, which means that if Facebook breaks the terms, it's breaking the law and penalties can be assessed.
I was aiming at the notion that "this isn't how the law works".
With the 2011 FTC consent agreement, there is plenty that Facebook can do with customer data that will land them on the wrong side of the law.
Investigating and preventing unfair or deceptive acts/practices affecting commerce is a big part of what the FTC does: https://www.federalreserve.gov/boarddocs/supmanual/cch/ftca....
Feels like the right solution even if the task is monumental.
> Hu-manity.co has ... designed new intelligent contracts on blockchains which humans can use to negotiate new terms of consent and authorization with corporations so that inherent human data can be respected as legal property.
It's great that people are serious about this, but extreme care should be taken with the latest tech hype! Blockchains are (by design) 'out there', which means that once consent is given, that consent is 'out there' for all time. Blockchains are also rather public, so it would seem rather easy to 'harvest' who has consented to what...
... unless I missed something?
With that said, I don't know if that is applicable here as I'm not even aware of what the potential charges might be.
Violation of a consent decree can result in criminal contempt-of-court charges. See 18 U.S.C. section 401 (https://www.law.cornell.edu/uscode/text/18/401). See also United States v. Schine, 125 F. Supp. 734 (W.D.N.Y. 1954).
I didn't say otherwise.
> The code in question allows for such imprisonment - that's why it's called "criminal contempt."
I don't disagree. What I said is that breach of a consent decree is not necessarily a case of criminal contempt. And whether or not it does rise to the level of criminality will be determined by the judge, not the investigators. Therefore, referring to it as a 'criminal investigation' does not make sense if the investigation is strictly investigating breach of a consent decree.
A) The base rate of consent-decree breaches resulting in a criminal contempt charge is high, which I very much doubt.
B) There is some information about this particular case that makes a criminal contempt charge likely. This could be true, but the article makes no attempt to demonstrate it.
Unless one of those things is true, I don't think it's fair to refer to this, at this stage, as a 'criminal investigation'.
I’m no fan of FB’s extensive use of dark patterns, but the concerted attack on FB is meant as pressure so that Zuck agrees to let FB be used as a great firewall.
Thank you, Zuck, for resisting the pressure if indeed you have. It will only increase as 2020 approaches.
Maybe there would even be less cruft and bloatware when externalities were accounted for.
229. If a builder builds a house for someone, and does not construct it properly, and the house which he built falls and kills its owner, then that builder shall be put to death.
230. If it kills the son of the owner, the son of that builder shall be put to death.
> It's already been reported that there are ongoing federal investigations, incl. by the Dept of Justice. As we’ve said, we're cooperating w/ investigators and take those probes seriously. We've provided public testimony, answered questions, and pledged that we'll continue to do so
Second, Facebook is not currently "accused of broad criminal activity". They are being investigated for breach of a consent decree, which may lead to criminal charges, which in turn may be broad.
In public testimony, IMO, Zuck came across as a smug mob boss intent on accumulating power without bound. The recent pivoting, the conspiracy-theory rumors (which shall remain nameless) accusing Zuck of rogue CIA cooperation, the news releases about criminal shenanigans surrounding data sharing coinciding with yesterday's extensive global downtime, and now his consigliere departing: it all looks suspicious.
At the least, it’s an obvious monopoly controlling a significant cultural aspect of a global social graph. Now they are moving to undo the messaging unification. Last week they wanted gossipers to have more privacy so they can gossip and get away with it better.
I admit to being biased, but based on my direct research into how data moves around the Facebook world: whether you are a member or not, whether you have blacklisted FB domains using Little Snitch, disabled all remote JS, etc., it doesn't matter; they keep tabs on you somehow. They are essentially their own intelligence community, with unchecked power and reach.
The accusation from Voldemort is that they had or have secret deals with telecoms etc., which if true is beyond insidious. IANAL, but common-sense-wise, if that's true, they should be broken up, and not simply by undoing messaging unification.
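For anyone unfamiliar with the domain-blacklisting approach mentioned above, here's a minimal sketch using `/etc/hosts` (the domain list is illustrative and far from exhaustive; Little Snitch filters outbound connections per process, while a hosts entry is a cruder, system-wide equivalent):

```
# /etc/hosts
# Route known Facebook domains to a non-routable address so the OS
# resolver never reaches them. Subdomains must be listed explicitly;
# hosts files do not support wildcards.
0.0.0.0 facebook.com
0.0.0.0 www.facebook.com
0.0.0.0 graph.facebook.com
0.0.0.0 connect.facebook.net
0.0.0.0 fbcdn.net
```

Note that this only blocks requests originating from your own machine; it does nothing about server-side data sharing between Facebook and its partners, which is exactly the "they keep tabs on you somehow" problem.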
Is Facebook trying to delete incriminating evidence? It wouldn't surprise me one bit if they did that. I hope the prosecutors are smart enough to look for evidence of this, even though I'm sure FB's experts will try to leave as few traces as possible.
redhat, ibm, (linux, java, etc) for starters...
I mean there's Poe's Law and then..
That's such a poor excuse. Lead by example.
C'mon, they are surveilling billions of people, selling, sharing, and using our data en masse. They monopolized the internet, search, and digital advertising. And I need to be sympathetic to their situation?
Since when is Google the owner of the Internet?
I don't think Facebook is destabilizing the country. In fact I don't think anything bad at all is happening. Like please explain to me your negative repercussions from Facebook sharing some of their data with Amazon. How did that undermine the country? People have gotten so hysterical about this topic, it's a madness, and that is what's destabilizing the country IMHO.
Like, OP is calling to put some of the most brilliant minds in Silicon Valley in jail, which is ridiculous! But if anything like this should happen, I think you're going to see FAANG & Co seriously consider rebasing outside the U.S. (well, maybe not G).
umm.. massive and hugely polarizing public influence campaigns, if extant, aren't destabilizing?
> Like OP is calling to put some of the most brilliant minds in Silicon Valley in jail, which is ridiculous!
plenty of brilliant sociopaths out there.. brilliant != good.
It is possible to be a capitalist and act morally. Rates of crime are at some of the lowest levels of all human history now.... is that a sign of immorality?
Morality is about making responsible choices.... often tough ones. Abdicating responsibility to a deity or a monetary system is the opposite of what is needed for morally responsible decision making.
Just baseless speculation; I don't work for Facebook or have an inside source.
But employee burnout (and the nascent mass exodus of top talent we'll inevitably be hearing about shortly) may very well be.
I simply can't sell myself to an immoral organization where, I’m afraid, any concerns of mine would be drowned out, ignored, or silenced by the incumbent powers at Facebook that are dead set on their current path.
There might be good engineers at Facebook trying desperately to change the culture, but I encourage others to resist fighting the (in my view futile) good fight.
Facebook is doing so many shady things that you would likely be able to verify that your fear was well founded, but many other companies do shady stuff too, and you'd be none the wiser simply because they don't do enough of it to trip your alarm. The amount of activity that can't stand the light of day is usually roughly proportional to company size, or, if management is at all enlightened, non-existent, because they instruct their employees not just to follow the law but to do what is right. That's pretty rare, though.
As a further anecdote, I just had a FB recruiter contact me about an ML position three months after they rejected me and I told them never to contact me again.