Why We Disagree with The New York Times (fb.com)
346 points by sqdbps 9 months ago | 304 comments



Facebook’s response seems perfectly reasonable to me.

- Obviously, integrating FB functionality into a mobile OS UI requires an API to render the data being displayed.

- If your phone has a home screen widget which shows friend data, obviously that friend data came from a Facebook API call.

- If you type your Facebook username and password into a settings dialog in order to enable that home screen widget to function, that’s pretty obviously consenting to enable the functionality.

- There once was a day when we demanded that our social media platforms provide these “open access” APIs specifically to allow access to our own social feeds on our own devices.

We trust our user agents to render our private information on our devices. Sometimes we even trust our user agents to leverage network services to improve on-device performance (e.g. Amazon Silk).

If and when user agents exfiltrate our personal data off-device for data-mining purposes (e.g. the Chrome Omnibox), it should be disclosed and opt-in.

It sounds like Facebook provided an API to device manufacturers to allow them to deeply integrate social features on device. This has historically been considered a Good Thing. It sounds like they put together a legal agreement that required these device manufacturers to take due care in implementing these features to protect user data. Also seems like historically this is what we would call a Good Thing.

When you enter your username and password in order to view your Facebook feed — that’s called a “user agent” and that’s something appreciably different than a third party quiz app sucking in friend feed data.

However, the Chrome Omnibox aside, user agents are not expected to exfiltrate data in any way. If that occurred, that would indeed be a story I’d like to read, and my ire in that case most certainly wouldn’t be directed at Facebook.


My email client has full access to the contents of my email, but the author of said client has none. Although my client has credentials, these are stored locally and, like my email, never communicated to the creator of my client.

If the device makers' applications merely fetched data on behalf of users and displayed it on their devices, it would be no more of a problem than my email client.

In the first place, it looks like the APIs provided access to data the users had opted not to share. In the second place, the Times article seems to state that partners partook of that data themselves rather than merely acting as a client. For example:

"Facebook acknowledged that some partners did store users’ data — including friends’ data — on their own servers. A Facebook official said that regardless of where the data was kept, it was governed by strict agreements between the companies."

Before deciding whether Facebook's response was reasonable, did you even bother reading the Times article?


> "Facebook acknowledged that some partners did store users’ data — including friends’ data — on their own servers."

This statement from the article is meaningless, as many legitimate things might count as a "third party storing data":

1.) Storing and editing contact imports, if the user wants. This is technically friends' data, but it's my contact list.

2.) Proxying and caching: we are talking mostly about shitty phones before Android and iOS were mainstream, so "store on their servers" could be as simple as an artefact of implementing a non-HTML Facebook client app on that shitty phone. Examples of such artefacts: custom notification channels, caching, downscaling of images. I think BlackBerry proxied all of their communications through their servers (not 100% sure), so if Facebook was available, they probably also had to store something on their servers.

The journalist didn't even attempt to distinguish between "my device is calling this API" and "the company is making requests", resorting instead to conflating the two with the ambiguous "some partners did store users' data". This is an example of a journalist trying to create a story instead of getting to the truth. If they had dug deeper and tried to figure out which companies stored data, what type of data (was it a contact import, caching, or did they download the full graph?), and for what purpose, it would have been a valuable article.
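To make the distinction concrete, here is a minimal sketch (all names hypothetical, not Facebook's actual API) of the two access patterns being conflated: a device client that only fetches and renders for its user, versus a partner server that fetches and retains a copy.

```python
class FakeAPI:
    """Stand-in for a social graph API; returns canned data."""
    def get(self, path, token):
        return {"path": path, "items": ["post1", "post2"]}

def device_client_fetch(api, user_token):
    """Pattern 1: "my device is calling this API" -- fetch, render, done."""
    return api.get("/me/feed", token=user_token)  # nothing retained anywhere

class PartnerProxy:
    """Pattern 2: "the company is making requests" -- a partner server
    in the middle that keeps its own copy of what it fetched."""
    def __init__(self, api):
        self.api = api
        self.stored = {}  # this is the "partners did store users' data" part

    def fetch_for(self, user_id, user_token):
        feed = self.api.get("/me/feed", token=user_token)
        self.stored[user_id] = feed  # retained on the partner's servers
        return feed
```

Both patterns look identical from the platform's side (the same API call arrives), which is exactly why "some partners did store users' data" tells the reader so little on its own.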


The APIs provided access to data that had been shared with the person who was logged into the device, but not with random Facebook apps like Farmville. The New York Times' argument is essentially that the flag which stopped Cambridge Analytica and Zynga getting access to your data should also have stopped your actual friends from viewing it except through installing Facebook's official app.

It's hard to see any way in which making users choose between sharing their info with every dodgy quiz their friends use and forcing their friends to install the Facebook app and let it get its tendrils into their devices in order to interact with them would've been good for privacy, but this is exactly what the NYT is insisting Facebook should've done.


> When you enter your username and password in order to view your Facebook feed — that’s called a “user agent” and that’s something appreciably different than a third party quiz app sucking in friend feed data.

I think we're trying to make this different in hindsight, but APIs for this sort of thing are the replacement for "please give my app your Facebook password". I hope everyone remembers how Facebook used to ask users for their Gmail password and how Mint (still) asks users for their banking passwords.

And if what you are doing is "giving an app access to my account", then letting them see everything you see is a pretty natural API.

I think over time we have realised that users want more fine-grained permissions so they don't have to make such difficult decisions of whether they "trust" an app, but I know that as a developer and user I get pretty annoyed when other software cannot interact with an app for me because the API is too limiting.
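A rough sketch of that evolution, with made-up scope names (not Facebook's actual permission model): instead of an all-or-nothing password handoff, the server filters what it returns by the scopes the user granted.

```python
# Hypothetical scope -> field mapping; illustrative only.
FIELDS_BY_SCOPE = {
    "basic_profile": {"name", "user_id"},
    "friends_list": {"friends"},
    "email": {"email"},
}

def visible_fields(profile, granted_scopes):
    """Return only the profile fields the granted scopes allow."""
    allowed = set()
    for scope in granted_scopes:
        allowed |= FIELDS_BY_SCOPE.get(scope, set())
    return {k: v for k, v in profile.items() if k in allowed}
```

"Give my app your password" is equivalent to granting every scope at once; fine-grained permissions let the user grant just `basic_profile` and keep the rest, at the cost of the more limiting API the comment above complains about.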


Exactly. This seems to be coming from people who don't fully understand or appreciate the history or the social/technical/economic environment from which it arose, and who are then retroactively applying our relatively recently evolved demands for fine-grained control of data.

This type of thing in journalism really concerns me, given the recent moral panic over 'fake news' and the attempts to create systems which define it, when what is 'true' or factual is so often subtle and easily spun or twisted. Which is ironic in this context, because Facebook and the NYTimes are both leading players championing these efforts.


> Mint (still) asks users for their banking passwords.

In defense of Mint, most banks are severely behind the curve when it comes to authentication, and there would be no way to provide that service otherwise.

Where the financial institution has a better API, Mint happily supports it. (Interactive Brokers, IIRC, is an example of this.)


> There once was a day when we demanded our social media platforms to provide these “open access” APIs

And we still demand them for other types of apps.

If I have a Mac or iPhone, I can connect the Apple-supplied Mail app to Gmail with IMAP or POP or whatever it uses. If I have Windows with Outlook, I can connect it to whatever mail server too. And this software gets access to all the content of my emails, which is private data.

Likewise, on a smartphone, I can install a third-party app to access Hacker News or Reddit. Because both of them have an open API. (In fact, for a long time there wasn't an official Reddit mobile app, and they encouraged you to go third party.)
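Hacker News really does expose such an open API (the Firebase endpoint below is real); the tiny reader-app helpers are a hypothetical sketch of what any third-party client does with it.

```python
import json

HN_API = "https://hacker-news.firebaseio.com/v0"  # HN's public, open API

def item_url(item_id):
    """Any third-party client can construct the same request the
    official site would make; that is what an open API means."""
    return f"{HN_API}/item/{item_id}.json"

def render_comment(raw_json):
    """Render a fetched comment the way a third-party reader app might."""
    item = json.loads(raw_json)
    return f"{item.get('by', '?')}: {item.get('text', '')}"
```

The client gets access to whatever the logged-out site shows, and nothing obliges it to phone home to its own author, which is the whole distinction being argued in this thread.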


It occurs to me that in addition to the usual list of permissions for apps, perhaps we should also be presented with a whitelist of domains the app is allowed to communicate with.
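No mainstream OS ships exactly this today, but a sketch of the idea is simple: the app declares an allowlist up front, and the OS network layer refuses anything else. The domain names here are made up.

```python
from urllib.parse import urlparse

# Hypothetical allowlist an app would declare at install time.
ALLOWED_DOMAINS = {"graph.facebook.com", "api.example-widget.com"}

def request_permitted(url, allowed=ALLOWED_DOMAINS):
    """Would the OS let this outbound request through?"""
    host = urlparse(url).hostname or ""
    # allow exact matches and subdomains of an allowed domain
    return any(host == d or host.endswith("." + d) for d in allowed)
```

Presenting this list alongside the usual permissions would let users see not just *what* an app can read, but *where* it can send it.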


Device manufacturers can, of course, trivially bypass any way Facebook tries to hide this data from them if they really want to. They control the OS and its implementation; nothing can be hidden if they badly want to get at it.


Your phone manufacturer can also theoretically just watch your phone's display output over VNC if they had that kind of control over your device. If you believe your device's manufacturer is deliberately working to actively steal data from third party apps on your phone, perhaps you should consider a different phone.


Yeah, the problem that keeps, and keeps, and keeps coming back is that cloud-based software gives too much control to the software authors.

Because that's where the ground of your argument sits: because of server-side "rendering", the data must be sent to the servers of the software author (in this case Huawei, but doubtless many others) instead of to the device that actually displays the information.

... and that's exactly why everybody's doing cloud based software. Because it allows them to abuse this. And once they've got the customer by the ... they ask money for that.

Cloud-based software is how the author of a todo-list app can hold users' data hostage more effectively, and more detrimentally to the customer, than Microsoft's monopoly that was broken up in the cause of justice.


I had a Windows phone that used these APIs. When Windows Phone 7 came out, Facebook didn't create the Facebook app for that OS; Microsoft did. It looked like what you expected Facebook to have built, and it worked like the Facebook app that other platforms had. The whole purpose of the API agreement with Microsoft was to guarantee that the Facebook app they wrote would still work years down the road. Microsoft did not have access to my data. They were allowed to write an app that allowed ME to access my data.

The NY Times article makes it sound like Microsoft was allowed access to my data. They were not. They were allowed to create an app that had access to an API so I could access my data from the Windows Phone device.


The NYT article says that Facebook told them that some partners stored that data on their servers. Do you have a good citation that this does not refer to these device integrations, given that you so strongly claim it doesn't? (It's entirely possible that it's a misleading quote by the NYT, or a misunderstanding, but I don't see a clear answer in any of what we know.) That is, from my perspective, the critical point here.


No, I do not. I do know that the Windows Phone version communicated directly with Facebook, based on network traces I did back then. It did not use an intermediary. I could see some devices using an intermediary, especially underpowered and "dumb" phones, where the manufacturer may have made a simplified web app that would be usable on their platform whereas the actual Facebook web app would not have been. Think flip phones that had Facebook integration.

I don't know if that happened or not, but the way this article reads is very misleading and several of the vendors that they listed did not actually have access to the data.


Surely that applies to every API in existence, for example, on the HTTP API I'm using right now to post this comment, I know that Mozilla stores some of my data on their servers (through FF Sync), and HN know that's possible, but it's not up to HN to ensure Mozilla aren't taking that data and selling it elsewhere.


HN didn't design the commenting API specifically for FF Sync to use though.


On Windows Phone 7, Facebook integrated with the Contacts app, so your Facebook friends would be added to your contacts, and their profile images would be added to those contacts if they didn't have one set. That's probably what it is, as those contacts would then sync with your Outlook account.


It sounded like a number of devices provided contact syncing with Facebook as well as the ability to backup your contacts and data to the device manufacturer servers. Now, they might have been storing other data that they shouldn't have, but backing up synced FB contacts by itself seems pretty tame.

I personally setup FB to do contact syncing with my Google contacts so I could get the FB profile pictures for all of my contacts. Does that count as Google improperly using FB user data? I don't think so.


If you sign in to Facebook in Opera Mini, Opera will store your Facebook data on their servers. How much does that upset you?


Sounds like closed source software to me.


I unexpectedly agree with FB here.

If NYT are correct then we can kiss goodbye to APIs that are used by any services that are not explicitly written and signed by the service provider. In the extreme that means you won't be able to log in to facebook on the web, only via a facebook app, because there's no guarantee that a 3rd party web browser isn't stealing data. That goes for any and every service dealing with personal data, and we pretty much lose the open web.

I want to protect user's data as much as anyone, but if a user deliberately installs a 3rd party app and enters their credentials into it, then they are consenting to that having access to their data under that app's privacy policy/terms. This should be obvious to all users, especially where GDPR advice has been implemented.


> In the extreme that means you won't be able to log in to facebook on the web, only via a facebook app

Taking that even further, if you really want to avoid the need to trust anyone, you'd need to run an operating system created by Facebook running on hardware created by Facebook. Otherwise, the OS and hardware vendors of course have access to your personal data. I guess it's all a spectrum, and certain hardware/software vendors you just need to trust (or at least assign blame to them, not Facebook, if they maliciously steal your data).


> That goes for any and every service dealing with personal data, and we pretty much lose the open web.

Some might argue that centralized walled gardens like Facebook are actually a risk to "the open web", rather than contributors to it by allowing access to some data via an API.


The limited responses so far in this thread only reinforce many companies' decisions to pull support for public APIs. How many years have developers been complaining that Twitter and Facebook have been restricting third-party API access? But now, all of a sudden, they're evil for ever offering APIs to begin with?


There's nothing wrong with offering a public API. There's everything wrong with offering a secret, non-public API where you give a third party I didn't consent to full access to all of my data.

See how that's a bit different? One of them is offering public information (say, stock quotes), publicly. One of them is offering private information (my sexual orientation and date of birth) to tons of third parties I didn't consent to having that information shared with.


Surely it is only giving BlackBerry the information if you type in your login credentials on a BlackBerry, right?

I don't see how the situation is actually different now: if you run the official Facebook app on your Galaxy phone, then Samsung could scrape and exfiltrate the data any time it wants. It is Samsung's fault if they do it, not Facebook's.


According to the New York Times article [1] the third parties were able to obtain information about you if a friend signs in on their phone, no consent from you required:

> Facebook’s view that the device makers are not outsiders lets the partners go even further, The Times found: They can obtain data about a user’s Facebook friends, even those who have denied Facebook permission to share information with any third parties.

[1] https://www.nytimes.com/interactive/2018/06/03/technology/fa...


Not siding with FB here, but their response appears to address this:

> Contrary to claims by the New York Times, friends’ information, like photos, was only accessible on devices when people made a decision to share their information with those friends.

Maybe the confusion is that FB isn't treating these integrations as "third parties" since they're supposed to be pseudo-official FB apps?

Edit: Thinking about it further, this is the crux of the matter, isn't it? Whether or not an FB-approved "mobile experience" counts as an official FB app or a third-party app?


Sharing with friends is not the same as with data mining third party apps such friends install


Of course they can, though. Are you surprised that to see data on your phone (such as images/text your friends share with you and nobody else), your phone needs to be able to pull that data from the Facebook database? How else is it supposed to work?

If your phone saved it or your provider saved the data in a server somewhere without your permission and used it for other purposes, that's a pretty big deal. As it is, this should only be a shock if you have no idea how the modern internet works.


How is this different from your friends putting your contact information into a cloud synced contact list? You shared your information with your friends, it's not Facebook who decided how your friends use that information.


I log into the Facebook app on my Blackberry. I can see my friend's data in the same way I could as if I used the Facebook website.

The only difference is that the app on the phone has been coded by Blackberry, and the JavaScript code in my browser has been coded by Facebook, except of course for the banner part that my ISP injected, or all the JavaScript code that third parties injected, and who are thus functionally equivalent to the code written by Blackberry.

In this scenario, no third party had access to anyones data, right?


The quoted behavior appears to essentially be the behavior of browsing your friend’s profile. Does the website itself also violate trust by allowing this behavior? Should Obama or Cambridge Analytica simply have violated the ToS to get users’ passwords to scrape facebook.com?

I don’t really care; if you install the facebook app you deserve whatever it does.


If Samsung secretly runs a keylogger on all their phones and then steals your data that way, that's a Samsung problem and you should be angry. That is not what happened here.

Facebook officially blessed other platforms, allowed them to say you were signing into Facebook, and then allowed those platforms to have access not only to your data, but your friends exhaustive data. Facebook was aware of this, explicitly allowed this, and their only safeguard was vaguely worded policies with their partners.

Even if the partners haven't stolen any data or broken their policies, Facebook intentionally and knowingly gave my data to a third party without my consent. Presumably now all 66 of these device partners have my full data, and I did not sign in on their device or otherwise authorize it.


“and then allowed those platforms to have access not only to your data, but your friends exhaustive data.”

Only for purposes of allowing the user to interact with Facebook. The platforms themselves couldn’t use the data for anything other than supporting user actions.

This is perfectly reasonable and is why APIs exist.

The issue, which has not been reported to have happened, would be if those platforms retained data without user consent and used the data for other purposes. That would violate the contract they signed with Facebook and be bad for user privacy.

To a non-technology person, these two very different things sound similar ("Facebook let Microsoft access user data"). But this is meaningless without more context.


Facebook officially sends arbitrary friends' data through the Android/Samsung APIs in the normal course of operating the official FB app. It likely also caches a lot of data locally, in a format and in on-device storage that Samsung could arbitrarily access. The only thing stopping Samsung from stealing all of your friends' data is legal and social, not technical.

Is there any evidence that the API was actually used to arbitrarily request and exfiltrate other users' profiles? If not, this is strictly only "Facebook allowed other developers to write Facebook apps": there is no way that could possibly work without offering an API with this level of access to the app.


No, it is giving Blackberry the information if any friend of mine offers THEIR login credentials.

I friended my cousin because she is my cousin. I didn't think about what kind of device she uses. I certainly didn't intend for her to hand my personal information over to a company that I don't know.


It's hard to believe you're asking this question in good faith. I'm sure you understand there's more to it than "APIs are good" or "APIs are bad", obviously the purpose of the API matters. If Twitter offered a public API for information that users hadn't realized they'd made public, that would be bad. When they pull support for an API for information everyone intended to be public and searchable, that is also bad.


Offering an API is great. Just don't expect developers to voluntarily restrain their use of the API according to some words you put on a web page. Maybe most will do so, but a significant portion will growth-hack their way right past whatever policy they agreed to. If you don't want someone to do something with your API, you better make it impossible to accomplish using said API.

The problem is in offering an API that makes abuse trivially simple and putting all of your faith in the "click agree" model of expecting developers to (a) understand and (b) actually comply with your policy. Particularly when many of the developers in question are non-native English speakers, or even if they are native speakers and don't bother to read the policy carefully.
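The point generalizes to the post-2015 friends restriction mentioned elsewhere in this thread: enforce it in the endpoint, not in the agreement. A sketch with hypothetical data structures:

```python
def friends_visible_to_app(app_id, user_id, friends, installs):
    """Server-side enforcement of "apps can request only the names of
    friends using the same app": the filter runs on the server, so no
    amount of growth-hacking on the client can bypass it."""
    return [f for f in friends.get(user_id, [])
            if app_id in installs.get(f, set())]
```

Under this design the policy text becomes documentation of what the API already refuses to do, rather than the only thing standing between the developer and the data.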


I'm no huge Facebook fan - hardly use it these days, but...

These apps are just alternative Facebook clients. Don't we _want_ a system where you can use different clients to access your own data?

If the problem is not trusting the client, well, that'll be a problem for any such system, even some utopian fully open, distributed and federated social network - until you build an open source client yourself.


I think the problem is Facebook's lack of transparency here. As a user, suppose I have some information that I've marked as "not available to third-party apps". Then I might be surprised to find out that Blackberry's servers have access to that data via a secret API, even if it is to implement a "Facebook experience".


Would you also be surprised to find out that Mozilla (Firefox), Google (Chrome), Samsung (Android Browser), Microsoft, etc. all have access to that data?


It seems that FB gave those clients an API to access more data than the user allowed. The BlackBerry used by the Times got data about the journalist's friends even though those friends didn't consent to that.

One thing is allowing the official FB client to see those data (FB has them anyway); another is letting third parties see them, and possibly store them on their own servers rather than only on our devices.

This is different from email and email clients. First, the expectations are different: if I send you an email, I expect that you can forward it to your friends or anybody else unless I explicitly ask you not to. Second, local clients don't send mail to their authors, and the same goes for address books. Third, we know that Google and others can see most of our mail anyway, because most people use only webmail, and messages are stored on those companies' servers.

Finally, FB didn't tell us about this API and what it can do. It's this secrecy that's hurting them, IMHO. Sure, I concede that they couldn't foresee the current climate around their company when they made the choice not to advertise it, or we wouldn't be in this situation now. But we're also here because of a chain of bad choices on their side.

My suggestion for a social network of the future is to have a single API, used also by the official client. The servers must not trust any client, which is the usual thing we do in web development, and give all them the same level of access. It's up to the user to decide if they want to use the official client or one of any third party.
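In code, the suggestion amounts to something like this sketch (all names invented): one endpoint, permissions resolved entirely from the authenticated user's token, so the official client and any third-party client see exactly the same data.

```python
TOKENS = {"tok-alice": "alice"}            # token -> authenticated user
SHARED_WITH = {"bob": {"alice"}}           # owner -> users they shared with
PROFILES = {"bob": {"name": "Bob"}, "carol": {"name": "Carol"}}

def get_profile(token, owner):
    """Single API for every client: the server trusts no client and
    decides visibility only from who the token authenticates."""
    viewer = TOKENS.get(token)
    if viewer is None:
        raise PermissionError("unauthenticated")
    if viewer != owner and viewer not in SHARED_WITH.get(owner, set()):
        raise PermissionError("not shared with this viewer")
    return PROFILES[owner]
```

Because the check depends only on the token, there is no "secret partner tier" to leak data through: a BlackBerry client and facebook.com itself would hit the same code path.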


I'm not sure this is a fair reading.

> The Blackberry used by the Times got data about the friends of the journalist even if those friends didn't consent to that.

They did consent to that - by becoming your friends! Are you saying that every time you open your friends' Facebook pages, a notification should be sent to those friends requiring consent?

I think perhaps you meant something else - the Blackberry got data about the friends even though the user did not consent to that:

> The Hub also requested — and received — data that Facebook’s policy appears to prohibit. Since 2015, Facebook has said that apps can request only the names of friends using the same app. But the BlackBerry app had access to all of the reporter’s Facebook friends and, for most of them, returned information such as user ID, birthday, work and education history and whether they were currently online.

This is also questionable. Would you say that Chrome requests and receives prohibited data when you use it to browse your friends list? We are talking about the distinction between a client (full access) and a third-party app (limited access). The cases described in the NYT article seem to be clients.

Further on your point:

> This is different from email and email clients. First, the expectations are different: if I send you an email I expect that you can forward it to your friends or anybody else unless I explicitly ask you not to.

You seem to be making an argument against your own claim here. The parallel would be: if I accept your friend request, I expect that you can see my data and use it (e.g. by browsing your friends list on the Facebook site, or in the BlackBerry client).

> Second, local clients don't send mail to their authors, same for address books.

Perhaps, but doesn't that hinge on the definition of "local clients"? For example, my Outlook definitely shares information with a cloud server. Was "The Hub" an unknown/unexpected feature of the Blackberry client?

> Third, we know that Google and others can see most of our mails anyway, because most people use only webmail and messages are stored on the servers of those companies.

Are device manufacturers not included in those "others"?

The email client parallel with Google is even more in Facebook's favour. For example, is Mozilla stealing data about my friends when I use Thunderbird to access my gmail account? What if I explicitly ask it to store my emails in a "hub" of sorts, so I could sync them between PCs?

> Finally, FB didn't tell us about this API and what it can do.

I think this is the strongest point, but it's important to note that we are judging Facebook's old decisions by our new, increased focus on privacy and user control. As one user pointed out, we used to give sites our passwords back in the day so they could integrate with other services (actually, this still happens in some apps).

> My suggestion for a social network of the future is to have a single API, used also by the official client. The servers must not trust any client, which is the usual thing we do in web development, and give all them the same level of access. It's up to the user to decide if they want to use the official client or one of any third party.

But isn't this literally what is happening here? The "secret" API does not have access to any data the "official" one (used by the site) doesn't (at least, the NYT does not present any evidence to that effect). You also seem to access it by giving your credentials to the "third-party client", i.e. no "special access".


Crazy that everyone in here is defending Facebook.

- How on earth does Facebook justify giving direct API access to information that users have, in every setting possible, marked as private?

- How on earth does Facebook justify offering deep API access on users who have literally disabled API access to their data?

It's ridiculous, and it's more ridiculous that users here are conflating "basic API access with a sane permissions system to give you control" with "deep API access with no privacy controls whatsoever that openly defy existing privacy controls".

It's not acceptable, and frankly, this is EXACTLY why government regulation of data online isn't a possibility, it's an inevitability. Because when the penalty for ignoring the user's selection of "DO NOT MAKE MY DATA AVAILABLE OVER THE API" and "DO NOT MAKE MY DATA AVAILABLE TO FRIENDS OF MY FRIENDS" etc. is billions of dollars in damages and potentially criminal charges for executives, magically, these violations will stop occurring.

Until then, everything in this document is either a lie or sufficiently legalese'd that it's worthless, just like the lies that they told to Congress, just like Zuckerberg's lies to E.U. as well.

I cannot wait until it is a crime to share private user data against their will. We live in a wild west and the past 10-15 years are proving just how much sheer damage we have caused in society by not criminalizing disrespect of digital privacy.


“How on earth does Facebook justify offering deep API access on users who have literally disabled API access to their data?”

Here's how it seems really non-crazy to me as a programmer. Let's say I make a phone with a Linux OS. I want users to be able to check Facebook on my weird OS. I ask Facebook to build an app for it; they say no. So I build an app myself that calls their API and lets users do stuff.

All my app does is call the API and show the result to the user. For the app to function, it has to use private data. But the data are not retained or analyzed, just used to operate the app.

This is how all third-party apps work. The APIs were not for the general public, but only for trusted third parties. The alternative is that only Facebook can build Facebook apps.

So this is pretty much how APIs have worked forever and why you only install apps and log into apps you trust.

It would be like complaining that Adobe Reader can read private files from your laptop via Windows/Mac APIs. Of course it can; this is good. It would be bad if Adobe were to misuse the data.


Private means that I (the user of my own FB client) can see my private data. That's obvious. What must not happen is that I can also see data that my friends, or their friends, marked as private. This doesn't happen in the official FB client and must not happen with any API.


And it didn’t happen with FB’s API. Your friends could see their private data. You could see yours. The apps were not able to show your private data to friends and vice versa without breaking their contract with FB.

It would be like being concerned because you and your friends both use Chrome to access your individual, private bank accounts. Google accesses your private information, with your permission, to display data on your screen. It's only bad if they misuse that data.


You do realize that Facebook will share a user's private information with any random web browser the user enters her credentials into, right? Is that "ridiculous"?


If a friend of one of my friends, whom I am not connected with, can access my private data despite my settings being clearly limited (API access disabled, "friends of my friends" disabled), as was reported to be possible, then yes, that is ridiculous.

However, your reply completely fails to grasp the problem, like so many others. It's not that I type in a username/password and get my data; it's that I type in my username/password and Facebook authorizes access to the data of as many as 10,000 to 100,000 other accounts, based on proximity to my account.


> your reply completely fails to grasp the problem, like so many others.

That's because you're just making up a problem that doesn't exist. The replies are all talking about what is actually happening, not your fantasy world. As the article states: " friends’ information, like photos, was only accessible on devices when people made a decision to share their information with those friends".


I understand why my iPhone needs access to my Facebook friends' info, in order to display it to me.

I don't understand why Apple Inc. needs access and permission to download and store my friends' data on Apple Inc. servers. That is what the NY Times reported, and Facebook has not directly denied it here.

Edit to be clear: I'm just using Apple as an example company.


Do you know about the Amazon Silk web browser? If you log in to facebook in that browser, Amazon may download and store your friends' data on Amazon's servers. And that's even without any agreement with facebook.

I can think of a handful of Facebook-like experiences that would require an OS/device provider to store data on their servers, especially considering the constraints imposed on some devices (particularly older ones) and some OSes (particularly older ones and iOS).

Let's take a concrete example. Let's say that Apple wants to support contact syncing across devices. It also wants to support contact importing through fb's device integrated api.

Now, let's say that Apple actually implements that in such a way that it's encrypted end-to-end. Even in that case, Apple may have needed permission to store that data on their servers under whatever agreement the companies had and the Times could have written their story.

But I feel like that case isn't that interesting. Let's consider the case where your contacts are stored unencrypted on Apple's servers, but only for 3 hours while actually syncing to a new device (kind of a silly approach to the contact syncing case, but is likely more reasonable for other things). There may be a reasonable argument against that, but I think that most people wouldn't agree with it. Also, that argument would apply much more strongly to Android's contacts permissions (which don't require any sort of contract around how developers use/store that data).


I run websites that get traffic from Amazon Silk browsers. I've never signed a data sharing agreement with Amazon. There's more going on in this story than a simple browsing proxy.


> I run websites that get traffic from Amazon Silk browsers. I've never signed a data sharing agreement with Amazon.

Of course not. Because the fact that a user agent like this has widespread access to the data that the user has access to is expected. The OS itself also has at least that same level of access (as it has control over the behavior of the user agent). We just all seem to assume that we can trust those entities to not misuse the data (in some cases the particular software we run may have privacy policies that cover how they use that data that we necessarily give them access to).

> There's more going on in this story than a simple browsing proxy.

There's nothing in the report that indicates that. In fact, most "simple browsing prox[ies]" have no data sharing agreement limiting how they use the information from your browsing, in the cases in this story they were all limited to "signed agreements that prevented people’s Facebook information from being used for any other purpose than to recreate Facebook-like experiences." and even further "our partnership and engineering teams approved the Facebook experiences these companies built". In addition, it sounds like in most of these instances, no data was even accessed or stored off the device (i.e. they had much less access than a proxy).


> We just all seem to assume that we can trust those entities to not misuse the data

I don't need to trust that entities will not misuse my data; I have legal documents on my websites that set out how entities may use the data that they download from my sites. The terms apply to my users and to any intermediary technology providers.

For the Silk browser, I don't need a data sharing agreement with Amazon because my terms say that service providers like Amazon may not copy and store data from my website for their own purposes. Amazon Silk can access my site only for the purpose of helping the end user visit my site. I expressly exclude data sharing from my relationship with service providers and users.

This is a standard part of website terms and conditions; here's an example from Facebook's terms of service:

> You may not access or collect data from our Products using automated means (without our prior permission) or attempt to access data you do not have permission to access.

And from their platform policy:

> Data Collection and Use: If you are a Tech Provider for an entity, comply with the following:

> a. Only use an entity's data on behalf of the entity (i.e., only to provide services to that entity and not for your own business purposes or another entity's purposes).

Data sharing agreements are only necessary when another entity (i.e. Amazon the company) wants to store and use data separate from, and in addition to, the service they provide to end users.

The existence of a "data sharing agreement" is proof that device manufacturers were collecting and storing user data, not just facilitating user access to Facebook. That's what "data sharing agreement" means. Further proof is that Facebook explicitly said that some companies collected and stored FB user data.


On my iPhone, I enabled a feature that would sync my contacts with my address book. My blackberry did this as well way back in the day.

My iPhone is also syncing with iCloud.

Now my friends’ “private information” is on Apple’s servers via my iCloud backup.


Unfortunately, the NYT article is vague on how each of the approved device makers were using the data -- they only say "some partners" stored the data. But it doesn't seem like Apple in particular used the API in that capacity. From the article: "An Apple spokesman said the company relied on private access to Facebook data for features that enabled users to post photos to the social network without opening the Facebook app, among other things. Apple said its phones no longer had such access to Facebook as of last September."


I find it equally ridiculous that Hotmail can access my private emails that I send to a Hotmail user. Which is to say not at all.


> - How on earth does Facebook justify giving direct API access to information that users have, in every setting possible, marked as private?

The user logs into Facebook on the device. This log-in action on the part of the user is effectively permission to share data with the device.

> - How on earth does Facebook justify offering deep API access on users who have literally disabled API access to their data?

When the user logs into Facebook on the device, they are giving permission for their data to be transferred via an API to that device.

I do not understand your outrage.


>The user logs into Facebook on the device. This log-in action the part of the user is effectively permission to share data with the device.

As well as the data of 10,000 to 100,000 connected users as both friends and friends of friends have data pulled without any checks by the third party.

"When the user logs into Facebook on the device, they are giving permission for their data to be transferred via an API to that device."

When the user selects to have their data limited to friends only, and not friends of friends, then additionally disables API access to their data explicitly, no, they are not "giving permission for their data to be transferred via API".

They are quite explicitly doing the opposite.

My outrage, stated multiple times and oddly invisible to the users here, is that my user/password shouldn't authorize the third party to access 10,000 to 100,000 other users' private data, and those users' personal settings should override any third-party data-hose utility.


This is the same thing they said about Cambridge Analytica initially... but you can't just blindly trust others (in this case, apparently 60 companies got privileged access). If it's technically possible, then someone will do it. When your data is gone, it's gone.

How naive is Facebook really?


They're not naive at all. When it's not in your business interest to manipulate reality, you manipulate perception.


I think it's a more complex picture. I did my own deep dive after years of reflecting on this:

Inside the Bubble at Facebook

"Management will laud what employees do, show them selective facts that justify their views, and hire/promote those who behave similarly to them. Employees in isolated teams with training in a single function may not realize the broad, unintended effects of their company's work. They'll assume the best of their coworkers that they've developed friendships with from working in the trenches, without inquiring into the larger effects they're having."

https://www.nemil.com/tdf/part1-employees.html

Would love feedback.


Sounds like nuclear weapons research in Tennessee during World War 2: only inform employees enough so that they can carry on the work without jeopardizing the mission (building nukes vs building a platform which owns people's information).


Telling people "just enough" was more of a security thing than a psychology thing.

X vs the Manhattan Project isn't really a good test for the ethics of working on X.

You don't need to work hard to motivate people to start creating the atomic bomb when the Nazis are doing the same thing and a good chunk of your team fled the Nazis.

You don't need to work hard to motivate people to keep working on an atomic bomb when the alternative is a few million people being killed in the invasion of a bunch of islands that the inhabitants have pledged to defend to the death and thus far made good on their promise.

The Manhattan project is much less morally ambiguous than recent tech scandals (the words "recent" and "scandal" relative to the general population, people who follow tech have seen this stuff coming a mile away) because the cost of inaction in 1945 was so much higher than today. It's not like anyone was working hard to make Facebook IPO happen because they thought it would slightly reduce the chances of their relatives dying half way around the world.


Bookmarked for future reference. It's super important to remember that every company's behaviors reflect their incentive structure, not their mission statement.


I really like the succinctness of that. I wrote a lot of words, to say the same.


Facebook's making the distinction between allowing a device to offer Facebook-like services (which require Facebook functionality) and third-party apps that suck up all your friend data.

On the one hand, Facebook's got a point, that if you want to be able to use Facebook on a device without going through the Facebook app or the website the device needs to be able to authenticate onto some sort of API.

On the other hand, the NYT article makes the claim that the makers of the devices got access to the Facebook data, writing "Facebook acknowledged that some partners did store users’ data — including friends’ data — on their own servers". However, Facebook never followed up on that in their article, just pointing out that if you are logged into Facebook on a BlackBerry that the BlackBerry can make the same requests you could if you were logged into Facebook through the web browser.

The question that matters which neither side addresses well is how much of that data makes it to device maker servers (for a while, the NYT homepage was claiming 'dozens' but they removed that and it doesn't appear to be substantiated in the article).

There are related worries with the device having access to the Facebook data itself, but at that point you need to start worrying about malicious activity by device makers in general. E.g. will my phone start sending my web history back to its maker as well? my bank account numbers?


The fact that "data made it to device maker servers" is a stupid, meaningless point. Of course it hit their servers. And it should hit their servers. It's idiots who don't know what they're writing about causing hysteria among idiot's who don't know what they're reading in order to sell advertising dollars.

And the whole time they're pointing at facebook as the corrupt ones.


I'm not sure it's meaningless. It seems to me that there is a distinction between [device manufacturer X is building their own copy of Facebook's graph by acquiring friend data] vs [device manufacturer X allows a user to login to Facebook and access friend data on their device without having to use the web browser].


Nobody was doing either of those things. There's a whole plethora of innovation and social network competition and integration that happened in the 2000s. Google FriendFeed.


I can see no reasonable need for device manufacturers to have your facebook data (and that of your friends) stored on their servers.

Care to elaborate on why this data should be allowed to leave the phone? (and be allowed to be collected regardless of user's privacy settings)


How do you think Motoblur worked?


It seems they don't really disagree but are simply saying, its not as bad as you think. Is there anything factually incorrect in the New York Times reporting?


It's one of those articles that's technically true-ish so long as you redefine words to mean something very different from what people would expect them to, but is written in a way designed to misinform basically everyone who reads it.

For example, when they say "Facebook Gave Device Makers Deep Access to Data on Users and Friends", they mean Facebook let them write software that could be run by their users and give those users access to information their friends had shared. There's nothing technically untrue about this, but it gives a false impression about what information Facebook made available to who. It makes it sound like Facebook gave a big fat chunk of user data to device makers as a bribe to include Facebook on their devices, when in reality we're talking about giving their software the access it needs in order to actually provide access to Facebook in the first place.

I've seen a lot of very confused comments here and on Twitter as a result.


> It makes it sound like Facebook gave a big fat chunk of user data to device makers as a bribe to include Facebook on their devices, when in reality we're talking about giving their software the access it needs in order to actually provide access to Facebook in the first place.

These two things are not mutually exclusive. They effectively did both, even if the intention is unclear. You must have forgotten how bundled apps on OEM Windows worked.


I think the point is, data goes into those companies and in some cases gets stored on their servers and from there, who knows what happens.

Facebook makes it sound like the user is in control.


NYT says: ‘Some device makers could retrieve personal information even from users’ friends who believed they had barred any sharing, The New York Times found.’

FB says: ‘Contrary to claims by the New York Times, friends’ information, like photos, was only accessible on devices when people made a decision to share their information with those friends.’

This is the only disagreement as I can see.


Well, this is a huge difference.

If FB is correct here, then the whole thing is a non-issue. Some API giving access to the data otherwise available through a web browser is a good thing.

On the other hand, if an API provides access to information that isn't accessible through a web browser (and doesn't show in the official FB app), then it's reasonable to loudly complain.


NYT said relationship status, religion, political leaning and upcoming events, among other data were shared.

FB saying "we didn't share photos" is trying to say "well, we didn't give away everything"


It seems like a huge issue though. And we need to know in what precise way Facebook enforced this with third parties and verified compliance. If they just put it as a line in a contract, but left the ability to circumvent this open in the device-integrated API, and then never did due-diligence compliance checks, then Facebook's defense is entirely disingenuous and it's reasonable to view them as culpable.


Incredible, they are using the current controversy to represent their locking down of the API in a good light. Facebook didn't restrict their API because they were privacy-conscious. They restricted it so you could not build experiences they did not want:

- You cannot sync your address book contacts with facebook in order to get profile pictures (you used to be able to do this)

- You cannot write an alternative Facebook client (with a better timeline, no ads, ...)

- You cannot write a complete bridge to another social network (e.g. implement Federation)

- You cannot build a P2P (serverless) application over Facebook. E.g. a chat, or something to send a file to a friend on Facebook, or to initiate a TeamViewer-like session.

All of these are either explicitly forbidden by policy, or have been closed by specific changes to their API.

To be honest, I don't care too much that people were able to scrape data I put up voluntarily. The German Facebook clone was called StudiVZ - Student's Directory. This sounds a lot like a telephone book, and that was the mindset and expectation I had when signing up to Facebook. Create and curate a profile for friends and friends-of-friends to see, and I didn't care much if others saw anything, because it was irrelevant to them. I mostly cared about meeting people - being found, and finding other people. In this light, I'm more concerned about data freedom than data protection. While the latter is important of course, it's unfortunate that the former is always forgotten.


A lot of people are missing the point, debating whether this was reasonable or not.

Mark got away with telling Congress ~"we don't share with third parties" and now they're saying Blackberry's not a third party.

If it's all okay, why didn't Mark come clean? and tell his Congressmen? He had a chance to explain this arguably-harmless behavior, but he chose to sidestep it. Why? Did he not understand the question?

It's fine that this data-sharing is maybe reasonable. It's not fine that Mark withheld this from Senators. This is exactly what they were asking about, and given the chance to explain, he chooses silence. He gets to avoid the public debate while the techies argue amongst themselves.


It seems like Mark would always have to answer 'yes' to that question, since accessing Facebook via the web means you share your data with Google(Chrome), Mozilla(Firefox), Microsoft(IE), etc..

(Also any OS/kernel manufacturers who get access to your data through your usage of the OS or TCP/IP stack).

Facebook is sharing your data with Blackberry in the same manner.

I don't even see a consent difference, since you need to explicitly consent to sharing your data by entering your Facebook user/password into the Blackberry UI app. Similar to how you write your user and password into Chrome, thus "sharing your data" with Google. It also doesn't seem like there's any evidence that the UI apps were intended to secretly collect and store data(Was "The Hub" an unknown feature?)

I don't believe the evidence presented here invalidates Mark's answer. His answer would have been meaningless if he had taken the definition of "third party" put forward in the NYT.


In this specific instance, the Times article might be overblown.

They specifically mention that they were able to use BlackBerry Hub with a reporter's account to query Facebook data. The article never states whether BB Hub connects to Facebook directly, or whether it receives data from a BlackBerry-operated service.

The latter case is clearly user-hostile. If BlackBerry (the company) can read user data and Facebook claims not to allow 3rd-party access, then that is bad, and it should be treated as a breach of the user's trust.

The former case is more complex. As a user, I care a great deal that I can access Facebook using my choice of browser, whether that's Chrome, Firefox or Edge. I shouldn't be limited to the top three either. Some users may prefer a browser that works with their screen readers, others may prefer the built-in browser in their smart TV, and others yet might prefer a unified messaging app, like BB Hub.

The distinction between what happens locally or in the cloud is often unclear, and it's not getting any better. Chrome on Android wants to accelerate mobile connections by routing them through a compressing proxy. I can get an extra-secure version of chrome from authentic8 to protect against malware, with the caveat that it runs in their datacenter.

I feel that the tech industry in general, and Facebook in particular, are struggling to tell users what happens with their data. Sometimes it's because things actually are complicated, and sometimes just to hide obvious overreach. The obvious blowback: complaints, strict regulation and mistrust. As the people who build and run systems, we should strive to do better. Regain the trust lost by past mistakes, and get back to the point where one could realistically apply Hanlon's razor to reports of user surveillance.


The author does not provide a link to the NYT article to which this is a response. Maybe it was just an oversight.

https://www.nytimes.com/interactive/2018/06/03/technology/fa...


This is a really interesting moment and is worth some introspection.

On one hand, Facebook is clearly correct: If FB makes an API, and a user gives an application (written by a third party and run on a fourth party's device) their username and password, then FB cannot be blamed for the application using the username, password, and API to retrieve private data. Indeed, that's the point.

On the other hand, appearances make it look like Facebook is hiding some things: why is this not a public API? What trust are you putting in these third parties, what are you giving them that not everyone would be trusted with?

But most of all, people are waking up to the vulnerability of their private data. They are realizing that some things they've been taking for granted for years are dangerously insecure. So we have users, such as reporters, suddenly realizing that their device has access to all the data you view on it. Any third party app you give your FB uname/pwd to has access to everything on your Facebook, and the only limitation is whatever their terms of service are. (So does any software that app runs on top of.) Coming to this realization, we see backlash not always correctly directed. It would make at least as much sense to call out those third parties rather than FB, and ask them to prove they do nothing nefarious with this trust.
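The mechanism that comment describes can be sketched in a few lines. This is a hypothetical illustration only: the host name, endpoint path, and parameter names below are invented, not Facebook's real API. The point it demonstrates is that the server sees the same request whether it comes from the official app or any client the user has handed a token to.

```python
import urllib.parse

# Hypothetical API base and endpoint, invented for illustration.
API_BASE = "https://api.example-social.com"

def feed_request_url(access_token: str) -> str:
    """Build the request an unofficial client would send.

    Authorization is carried entirely by the token the user obtained
    by logging in; the server cannot tell whether these bytes came
    from the official app or a device maker's integration.
    """
    query = urllib.parse.urlencode({"access_token": access_token})
    return f"{API_BASE}/me/feed?{query}"

print(feed_request_url("token-granted-at-login"))
```

The only limit on what such a client may do with the response is policy (terms of service, signed agreements), not technology, which is exactly where the disagreement in this thread lives.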

Is it too optimistic to hope this will stir mainstream interest in free and open source software?


In this FB response, they seem not to disagree with any of what the NYTimes had to say. This is called justification, not disagreement.


Your real data has already been mishandled by everyone from banks to healthcare providers to governments. This mass hysteria about FB is just a silly nuisance constantly getting blown out of proportion by turf wars between FB and traditional media. Oh, and in this particular case FB is calling the NYT fake news.


With all that inside knowledge you probably should blow the whistle.


whistle whistle


The fact that you can't read messages (without switching to 'desktop mode') on their mobile web app is such horse shit. Yuck.


For now you can still read and write messages on https://mbasic.facebook.com - fast, pure HTML, no JS.


Either that or you can go to the basic version at mbasic.facebook.com for an even poorer (though somewhat better in some senses, like having messages integrated) experience.


Well, it definitely beats having to navigate the desktop version on mobile. Thanks for the link!


> mbasic.facebook.com

Thanks!


imposing `messenger` on all of us because they know none of us would use it if we had a choice


I just graduated college and pretty much all of my friends and everyone I knew there used Messenger as default for text communication. There's plenty of other alternatives, but that's what people seem to prefer. It's a pretty decent app and in my opinion way better than the old built in FB messaging.

I'd prefer to use Signal and my family uses it to communicate but it's definitely not as nice as Messenger.


That's what the Messenger app is used for.


Well no... there's nothing technically stopping them from supporting messages on mobile.

This is what they do on mobile:

* Lie to you about how many messages you have

* Ask you to install an app to see those messages

* Upon installing the app, give them permission to mine your data


Yes, it's a tactic to get you to install that app so they can mine your calls, texts, and contacts.


Right, the Messenger app that significantly slows down my phone and decreases its battery life.


Yes, that's part of the horse shit.


>These partners signed agreements that prevented people’s Facebook information from being used for any other purpose than to recreate Facebook-like experiences.

Two people you don't know made a deal about how to use your information without telling you. There is no reason to think they are going to keep that deal, no reason to think anyone is actually checking up on your information, and no reason to think either of them cares... and no way to know if anyone actually keeps the deal... and probabbly no recourse even if you did know someone broke the deal.


As a non-user of Facebook, I completely take their side with all of this.

1) If people don't want to read the fine print, whose fault is that? 2) How have we gotten to a point where we are abdicating our choice voluntarily, and then acting begrudgingly toward the new owners when they misuse it?

I must apologize for my cynicism here, but we've been going around this mountain for a very long time now (circa 2013 IINM?). I'm getting tired of hearing how people are feeling violated due to their own actions.


Correct me if I'm wrong, but my impression is that this all comes down to one basic question:

Does a Facebook-approved "mobile experience" count as an official FB app or a third-party app? It seems to me that the FB post is trying to frame it as the former, and everyone who's upset is trying to frame it as the latter.

Is that what this entire disagreement is about? Because if it is, maybe it would help if we just focused on that question.


Did you consent to anything that Facebook approves when you consented to using Facebook? Did you consent to the transitive property?

How does Facebook approving of some obvious third party make the third party not a third party? Approval and third party status are orthogonal.


I didn't read/watch the testimony before Congress. Did they flatly claim that "we no longer share private information with third parties"? Because if so, this post seems to confirm that that was a lie, even though they give (arguably) good justifications here for why they share private information with some third parties.


This is the article that this article responds on: https://www.nytimes.com/interactive/2018/06/03/technology/fa...


This should be the top comment.


Perhaps Zuck isn't as bright as we all think? This reply is far too tech-heavy and jargon-laden to be taken seriously as a reply to something as mainstream as the NYT.

Their PR problems aren't rooted in SV, yet that's who this is targeted to. It doesn't make (good brand) sense.


> Their PR problems aren't rooted in SV

Silicon Valley is the last place I'm consistently hearing full-throated defenses of Facebook. It makes sense to keep one's base in order.


That might be true. But it's still not their bigger / biggest PR problem.


> * it's still not their bigger / biggest PR problem*

It's the only one that could matter to them. Congress flopped when Zuckerberg testified. They are clearly no present threat. And we haven't seen a wave of action from states' attorneys general. We have no evidence users are decamping. And by extension, the advertisers are staying.

The only weak point is in (a) recruitment and (b) political support from the tech community. The first can be solved with money. Fortunately, Facebook has pots of that stuff. The second relies on keeping the armies of defenders, who call every Congressional office on their own accord on a strikingly-regular basis, working.

For an example of what happens when one loses their base, look at Uber. It went from teflon to pariah virtually overnight.


> We have no evidence users are decamping.

Last I saw (on HN) was teens usage is down.

As for Congress...do you trust them not to loop around again?

The fact that FB believes SV is all they have to focus on is what created this mess in the first place, yes? Nuff said.


> Last I saw (on HN) was teens usage is down.

Teen usage has been dropping for a while, though, with corresponding rises in Twitter, Instagram, and Snapchat. Are there numbers saying teen usage reacted to these stories at all?


If not SV, then where?

I could buy "Congress" or "the European Parliament", but outside of those answers I don't see it. Average users don't appear to be rejecting Facebook to a meaningful degree, no matter how much news outlets beat the privacy drum. But the Valley is small enough that FB could get stuck paying extra for engineers or even losing domain experts.

More broadly speaking: consumer boycotts don't work (directly), but supplier boycotts sometimes do. That includes labor.


the only jargon in this is API?


Security-related issues need some time to investigate and to produce a proper report. This post seems like a knee-jerk reaction. Actually, while this sounds like more PR fluff in a poor attempt to stem more scrutiny, I'm confident that there will be more reports from other sources providing a stronger case against Facebook's claim on this topic. This post makes it clear that Facebook is clueless as to whether there are any weaknesses or breaches (despite shutting down 22 partnerships).


The New York Times should be ashamed of itself. Facebook let companies like Apple integrate with their platform.

To do so, these companies sent the same sequence of bytes from your mobile phone to the Facebook server, as the Facebook app does, or as any person can do. I can write my own Facebook app today, and there is nothing that Facebook can do about it, except sue me.

THAT IS A GOOD THING. LITERALLY FIVE MINUTES AGO THE COMMUNITY WAS FIGHTING IN THE COURTS FOR PACKETS SENT OVER THE INTERNET TO NOT BE CRIMINALIZED.

Remember the whole thing about violating the terms of services of a company which forbids scraping making you a criminal hacker?

The only thing Facebook said to Apple is: Let's make a deal, we will not sue you, you put our logo into your phone, also we promise not to break your app.

No data was given to anyone! This is literally my iPhone/Samsung/Blackberry running an app that gives ME access to MY Facebook data.

It doesn't even go to Blackberry's server! The nerve of people to pretend as if the data ON MY PHONE is somehow in the hands of a third party, as if my phone really belongs to the manufacturer. Again, we used to be fighting for the idea that these devices should be unlocked, should be under our control. Now you guys pretend that data my phone downloads from Facebook is somehow a violation because I decided to use an app that someone else wrote.

There is no possible universe in which that is bad. Think about the ramifications of these new ethics that people suggest here.


The NYT article says that Facebook told them that some partners stored that data on their servers. I note that this Facebook rebuttal does not refute that point in regard to the device partners. (Which could be a possible confusion or misleading quote in the NYT article, conflating all partners with the device integration partners, so I'd like it to be clarified somehow)


Sure. I still think that is fair enough, in particular since there are actual contracts involved here.

However, as you say, it is not clear at all. If that is the problem, the New York Times should write an article about that, not 5,000 words of insinuations with the goal, it would seem, of generating the maximum amount of confusion, the least amount of education, and thus the most outrage.


Forgive what may be a not-fully-informed question/opinion, but isn't the issue that 3rd parties were able to see private information? Not public information?

Like, you send the same sequence of bytes that they do, but the difference is that the response should be different depending on who is sending the bytes, no?


The app that was written by Apple, or by Blackberry, had to ask you for your password first (or possibly use a different kind of authentication where you had to allow access). Without that, they could not send the "right bytes", and thus the Facebook server would not send back any private information.

In other words, it worked exactly like the Facebook app written by Facebook itself.
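To make this concrete, here is a toy sketch (all names, credentials, and endpoints are invented; this is not Facebook's actual API) of why a device-integrated client is no different from the official app: without the user's own credentials there is no token, and without a token the server hands over nothing private.

```python
# Toy model of credential/token-based access (hypothetical names throughout).
VALID_CREDENTIALS = {"alice": "hunter2"}
TOKENS = {}
PRIVATE_FEED = {"alice": ["beach_photo.jpg", "status: hello friends"]}

def issue_token(username, password):
    """Server side: exchange the user's credentials for an access token."""
    if VALID_CREDENTIALS.get(username) != password:
        return None  # wrong bytes -> no token
    token = "token-for-" + username
    TOKENS[token] = username
    return token

def fetch_feed(token):
    """Server side: only a valid token unlocks that user's private feed."""
    user = TOKENS.get(token)
    if user is None:
        return None  # no authentication, no private information
    return PRIVATE_FEED[user]

# A client written by BlackBerry or Samsung behaves exactly like the official app:
token = issue_token("alice", "hunter2")
print(fetch_feed(token))     # the user's own feed
print(fetch_feed("forged"))  # None: the server refuses unauthenticated callers
```

The point of the sketch is that the manufacturer's app holds no special skeleton key; it can only present what the user typed in.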


Developer communities which pride themselves on their reason would rather not look at paltry things like details when a company they don't like is being criticized.


"Contrary to claims by the New York Times, friends’ information, like photos, was only accessible on devices when people made a decision to share their information with those friends."

Hasn't FB had a history of assuming the decision was positive if the user didn't opt out, through a difficult procedure?


Oh I am so sad, Facebook is “victim” of lies and poor reporting, let me get my tiny violin.

Facebook lies every possible time, it’s built into their perverse business model. Even if the article was actually wrong, it’s only fair that they get a taste of their own medicine every once in a while.


It is, of course, useful to remember that "These partners signed agreements that prevented people’s Facebook information from being used for any other purpose" was ostensibly the back-stop against the mass-harvesting undertaken by Cambridge Analytica, also.


I feel the response is similar to Apple's antenna-gate: invoking competitors to prove their point. They could have communicated better. It's good that they took steps a couple of months ago to shut down their legacy APIs. That's the only key takeaway from the post.


> It’s hard to remember now but back then there were no app stores.

Bunch of nonsense. Of course there were.


Yep - when the first iOS version pre-integrated FB, I recall specifically writing an email to a friend of mine who worked at Apple to blast Apple for including it, when there was a perfectly viable option for them to use the App Store. They didn't reply to that email, but we remained friends. That was about 8 years ago.


What they should have said was, "Remember when our official app was garbage and you needed to rely on 3rd party apps to have a passable mobile FB experience?"


A bit off-topic, but it bugs me how the picture in the article shows a Blackberry Bold running the BB10 OS, which never ran on that device. I wonder why they showed a (probably) photoshopped device instead of a real one.


> All these partnerships were built on a common interest — the desire for people to be able to use Facebook whatever their device or operating system.

Facebook's best interest, not 'common interest'.


No, the user's interest. Users were logging into these devices, presumably. They wanted to access their Facebook account on them, Facebook and the device manufacturer figured out how to do it. It sounds like everything is working as intended, there.


Building on FB is completely different than data-mining FB.


> These partners signed agreements that prevented people’s Facebook information from being used for any other purpose than to recreate Facebook-like experiences.

That is a big door and a pretty open use case. I am not buying into "and this is actually good for the users" story.

To me, FB must clearly choose how to handle this: (A) "we made a mistake, sorry; we will fix it" or (B) "this is working as designed; if you do not like it, go away". They could probably justify either case both internally and externally (better ethics vs better revenue), but trying to stand in the middle as they have often done in the past will likely backfire. Buy more popcorn. My 2c.


Who in their right mind would trust Facebook over the New York Times? Try again Facebook. I’m kind of glad they are doing this, I think it will speed up their decline in the end.


"We Disagree": the last refuge of a company's defense when they have absolutely nothing else with which to counter or rebut their opponent's argument.


It seems like a lot of Facebook employees are commenting on this thread. I wonder if they have tools to influence the public argument similar to the Russians.


I wouldn't be worried about BlackBerry, it's Samsung. I will never use their phones again with their fingers over every part of the Android pie.


I don't have high expectations for a company that says they take privacy seriously while constantly invading it and playing dumb.


People don't read terms of service. In using your phone who knows what you agreed to let your phone company do with your FB data.


So will Facebook take legal action against these companies who violated their service agreement?


Imagine if this same thing had happened with Microsoft, and imagine people's reactions.


lol, where were these people back when Facebook released the APIs? I bet even if they had been asked to give consent, they would not have been able to foresee what's happening today.


Their argument here is basically like being accused of murder and using "murder is illegal, therefore we did not murder anyone" as a defense.


> This took a lot of time — and Facebook was not able to get to everyone.

Spoken like the true Anti-Privacy Overlords they are. :(


writing this was a mistake. this is clearly a deceptive pr spin fluff that is entirely non-responsive to the specific concerns raised in the new york times article.


It reads like a long defensive argument, "how else did you expect us to make huge sums of money"? It is exactly as tone-deaf and legalese as you would expect. There is no new information here; Facebook says legal contracts actually protect your data so it can't go anywhere, "just trust us", etc.

> These partners signed agreements that prevented people’s Facebook information from being used for any other purpose than to recreate Facebook-like experiences.

That sentence does not make any sense. Signed agreements do not prevent your information from being used in other ways. That's insane, literally.

> Contrary to claims by the New York Times, friends’ information, like photos, was only accessible on devices when people made a decision to share their information with those friends.

In the past, Facebook Legal has stated that once a Facebook user has signed up, they consent to having psychological experiments performed on them with no further notice or direct consent. Facebook has acted on this and intentionally made hundreds of thousands of people fall into a depression, just to see if they could, and then they bragged about it. The sentence quoted above actually means "friends’ information, like photos, was only accessible [whenever and however we wanted to]" as Facebook considers those people to have already "made a decision to share their information with those friends" when they signed up.


> Facebook has acted on this and intentionally made hundreds of thousands of people fall into a depression, just to see if they could, and then they bragged about it.

No, they did not. They adjusted the proportion of positive emotional expressions and negative emotional expressions in users' news feeds in order to test for an association between that proportion and emotional expressions in users' posts. There are serious problems with what they did, but "intentionally made hundreds of thousands of people fall into a depression" is late-Slashdot style trolling.

http://www.pnas.org/content/pnas/111/24/8788.full.pdf
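For reference, the manipulation described in that paper amounted to a probabilistic filter over feed items by emotional valence. A rough sketch with invented data (my simplification, not the study's actual code or classifier):

```python
import random

random.seed(0)  # only so this sketch is reproducible

# Invented example posts, each tagged with an emotional valence.
feed = [
    ("great day at the beach!", "positive"),
    ("my dog is the best", "positive"),
    ("everything is terrible", "negative"),
    ("stuck in traffic again", "negative"),
    ("lunch was fine", "neutral"),
]

def filtered_feed(posts, suppressed_valence, omit_prob):
    """Randomly withhold posts of one valence, as the experimental condition did;
    the study then measured the valence of what users themselves went on to post."""
    return [text for text, valence in posts
            if valence != suppressed_valence or random.random() > omit_prob]

# "Negativity-reduced" condition: most negative posts are simply never shown.
print(filtered_feed(feed, "negative", omit_prob=0.9))
```

The ethical debate above is precisely about running such a filter on users who never knew they were in an experimental condition.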


unethical science does not deserve to be wrapped in its journalistic language in an effort to detach it from the harm it caused, it deserves to be called out for what it actually is, in this case intentional psychological manipulation of people's moods without their knowledge or consent.

One does not grasp the true harm caused by unethical experiments by reading only the documented (published or unpublished) scientific results. Two extreme examples, which I am by no means comparing to Facebook but rather offering as canonical examples of unethical science, are the Tuskegee experiments and Nazi experimentation. Reading only the published results of these experiments does not convey the horrific harm that actually occurred. The same principle applies to pointing at Facebook's scientific data as an argument that these experiments were not unethical. The scientific results only serve to shroud and conceal the actual activity that occurred in producing them.

(edited for clarity)


How do you distinguish between unethical science and A/B testing, or even just 'We changed our product, then didn't like the results, and changed it back?'


There is at least a dense 50 years of research and writing on the topic of scientific ethics, it's a bit naive to pop into a thread and ask, "so what's the deeeeaaaal with human experimentation?"


To be clear, I think the question is where the line is between 'experimentation' and 'just doing what people do normally'.

Concrete example: I am in the process of trying to hire people right now. I'm not sure what the exact best way to do that is, so I vary how I interview over time and see how well it seems to work. I record what happens in a spreadsheet, and then later on I look at the 'data' to make some judgments.

Am I experimenting on my interviewees? Am I potentially harming them?

Well certainly if I ask question A in one interview, and question A' in another, and question A' is harder and I don't pass that person, then one might be tempted to argue that maybe that person would've done better on question A; therefore, I've harmed someone by 'experimenting' and making it harder for them to get a job. Rejecting someone can certainly make them depressed.

In principle this is no different from A/B testing an interview process, using real live humans no less.

So do I personally have an ethical obligation to disclose to everyone I interview that I am conducting human experimentation on them? Or is the only difference in power and scale—I am just one person, but Facebook can A/B test on millions of people at once?


> Well certainly if I ask question A in one interview, and question A' in another, and question A' is harder and I don't pass that person, then one might be tempted to argue that maybe that person would've done better on question A; therefore, I've harmed someone by 'experimenting' and making it harder for them to get a job. Rejecting someone can certainly make them depressed.

> In principle this is no different from A/B testing an interview process, using real live humans no less.

> So do I personally have an ethical obligation to disclose to everyone I interview that I am conducting human experimentation on them? Or is the only difference in power and scale—I am just one person, but Facebook can A/B test on millions of people at once?

Generally, when interviewing, we ask the same questions of each applicant. We adjust the questions from A to A' as positions are hired, and pools are assembled, but for a specific job, each candidate gets the same questions.

It would have to be different for open recruitment not tied to a specific job, but I haven't had to do that.


There is plenty of gray area in human protections law. Source: I'm a researcher who spent most of last week dealing with lawyers in multiple time zones on exactly how to write a bit of protocol in order to preserve the grey area (so as to avoid setting unnecessary precedent), while honoring the more stringent interpretation (to avoid even the appearance of skating onto thin ice).

Four years ago, having talked to some of the same lawyers about the same topic, none of us had any thought that it was a grey area. An a priori interpretation of the law was pretty clear: a more liberal position was (and is) perfectly legal. But the most anxious minds tend to prevail in these matters.


I enthusiastically support requiring Facebook to have an IRB greenlight any user impacting changes they want to make.

That's a terrific idea.

https://en.wikipedia.org/wiki/Institutional_review_board


There is plenty of gray area in human protections law.

Of course, which is why the conversation continues. If it was settled at the beginning without nuance, it would be defined in black and white.

I'm not sure how your reply relates, though, since I was merely imploring GP to demonstrate any awareness of the state of research rather than jumping in with a lazy question-comment.

Elsewhere this is expressed by the slogan, "I'm 12 years old and what is this?"


Well, if your study involves depressed people being manipulated with negative newsfeed items in a secret study they never knowingly consented to, then that's unethical. The scientists aren't stupid; they knew beforehand that in a sufficiently large population there will be a segment of depressed people whom the increased negative newsfeed material could influence toward suicide. That is unethical to me any way you cut it.


I would assume a core tenet of ethics, and of deciding whether something is unethical, is whether it has the potential to harm people. Inducing depression in indiscriminate masses is pretty high on the list. It's not valid to compare the emotional impact of Facebook on millions of its addicted users to something like the color of a box of laundry detergent.


Selling cake has the potential to harm people, a moral framework needs a bit more subtlety than that to be sensible.


Well, including a laxative to judge how much people can tolerate before they get the shits isn't far off the mark from inducing depression.

FB - A little negative = meh

FB - More negative = major downer

FB - Totally negative = woah, look at the sorry state they are in now.

Cake - A little laxative = mmm this cake is special somehow

Cake - More laxative = nice cake, special, where is the roll of extra TP?

Cake - Dump it in = people home for days, the shits.

In both cases, not telling them is the ethics problem.

Basically the public trust boils down to people expecting others not to harm them. Depression can become chronic. The shits could damage someone requiring medical help.

Both are clear risks people would very likely avoid, if they knew.

FB - We want to run a depression test. Plz volunteer your feed and see if you get depressed.

Cake - We have a laxative that tastes great, but you might get the shits, plz have some cake.

See the problem?


Do folks "sell cake" to see if it worsens people's depression? This is more like selling cake that secretly has nuts in it, which are known to cause allergic reactions, without disclosing it.


Well, it's more like if the nuts had an unknown reaction, and the idea was to figure out what the reaction was by slipping them into some cake and then observing.


which would be massively unethical if not illegal to do as a food manufacturer


I think one good discriminant is: Would you want to explain it candidly?

E.g.: "Starting next week, we'll try to affect your mood as part of an experiment". Or "the new version of our product purposefully tries to depress you a bit" followed by "we didn't like the results of it so we've changed it back".


The terms don't apply to the same context. A/B testing is a specific technique (well, not that specific) which can be used to do ethical science or unethical science.


Medical trials are also just A/B testing. waves hands


True, but most of them have opt-in... like, people know they are in the test even if they don't know whether they are the control or not.


That was my point, apparently that was not obvious at all. That is what I meant by the "hand waving" part. Being apologetic about FB just because what they did was some kind of A/B testing does not redeem them at all.


I disagree. If you Google for primary literature related to Tuskegee, you will find, front-and-center, publications describing the ethical disaster that it was. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2609060/ https://dash.harvard.edu/bitstream/handle/1/3372911/brandt_r... One does indeed grasp the horror.

And, where did the Nazis publish, again?

We certainly need people to responsibly interpret peer-reviewed literature for lay audiences. Responsibly.


>If you Google for primary literature related to Tuskegee, you will find, front-and-center, publications describing the ethical disaster that it was.

Your citations were published ~50 years after the Tuskegee experiments. A proper analogy would be to compare your citations to the ethical condemnations of Facebook made 50 years from now.

>And, where did the Nazis publish, again?

The Nazis published in all of the front-and-center, mainstream publications after they were brought to the United States under Operation Paperclip (among other operations).

https://en.wikipedia.org/wiki/Operation_Paperclip


> A proper analogy would be to compare your citations to the ethical condemntations of Facebook made 50 years from now.

When the findings are published has no bearing on the ethics of the experiments. Ironically, the half-century lead time does not diminish the impact; if anything, the chronology strengthens it.


> And, where did the Nazis publish, again?

I can't answer that as it is generally agreed that this data should not be readily available, however Nazi experimentation was cited for decades, and much has been written over the ethics of this: http://www.jewishvirtuallibrary.org/the-ethics-of-using-medi...


[flagged]


It's not a 'Godwin' when it's literally the reason for IRBs and modern scientific human research ethics.


I did not compare Facebook's experiments to those of Nazis. I used Nazi experimentation as a canonical example of unethical science.

Since it is likely unavoidable that referring to Nazis will lead to accusations of Godwining, I have edited my above post to hopefully make this distinction extremely clear.


Don't feel bad about it; people parroting any mention of Nazis as "Godwining" (and almost always trying to use it as "you lose the argument!", which wasn't even Godwin's observation) are terribly short-sighted.

If you refuse to even discuss the Nazis and what they did, especially in directly applicable cases like this, how the hell can we ever hope not to repeat their mistakes?


> late-Slashdot style trolling

Off topic but that ^^^ is exactly why, after 20 years, I finally gave up on Slashdot and then stumbled across HN and couldn't be happier. I have, in recent times, gone back for a visit just to see what it's like and... realised my decision to leave Slashdot was made probably 10 years later than it should have been.


Similar to how they "prevented" apps from abusing their API through policy, while doing nothing in the construction of the API itself to prevent apps from breaking that policy. Apps are prohibited from filling in content that (by policy) must come only from the user, but oh, we're going to give you all the tools and capabilities required to march right on ahead and do what we agreed you wouldn't do. Then they proceed to enforce the policy inconsistently and incompletely, and only when complaints are registered.


Regulations forbid us from letting you access the filing cabinet but I can tell you, all of us will be out having a beer between 6 and 8 PM. Oh, and the key is under the flowerpot. Wink wink.


“But the terms of service were on display...”

“On display? I eventually had to go down to the cellar to find them.”

“That’s the display department.”

“With a flashlight.”

“Ah, well, the lights had probably gone.”

“So had the stairs.”

“But look, you found the notice, didn’t you?”

“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard'.”


But that analogy describes letting someone access the filing cabinet and failing to take reasonable precautions against it, which would be a direct violation. Which may or may not be the case here.


> Signed agreements do not prevent your information from being used in other ways.

Breach of contract opens you up to civil liability. Having this agreement creates disincentives that wouldn't otherwise exist.

"Prevent" doesn't mean to make something physically impossible or remove the ability to choose. Police can prevent drunken driving by announcing they will have DUI checkpoints on New Year's Eve. It doesn't make it physically impossible to drink and drive, but if people choose not to because they want to avoid the consequences, some drunk driving has been prevented.


Who is liable to whom when user A uses a free service B under an ambiguous agreement, that service makes an ambiguous agreement with another service C, and C does something that is arguably a violation of one of those agreements?

How much restitution am I entitled to for an ambiguous violation of an amorphous concept like privacy? And how much is Facebook entitled to from a partner who (might have) violated that agreement, causing non-measurable harm to me?


I worked at an F50 corp that built a special FB app for our platform. I worked on the project itself.

This is getting a little conspiratorial: contracts are very real, and though I didn't see ours and was not privy to any special user data we might have received, we were very careful about that kind of stuff, and liability is a huge, huge deal.

A contract stipulating that data has to be protected in a certain manner is a reasonable protection, depending on the sensitivity of the data.


That's pretty straightforward: Facebook has an obligation to hold up their end of the contract, including for the partners that license the data. If I have a contract with them, they should be liable for that; if they want to turn around and take it out of the 3rd party's hide with breach-of-contract litigation, they should do that, but it's not the user's responsibility.

It's very strange that you'd throw your hands up and say, 'well FACEBOOK didn't do it, so you're out of luck.' Facebook did do it, by not policing their third-party agreements.


> Police can prevent drunken driving by announcing they will have DUI checkpoints on New Year's Eve.

Well, it’s only prevention if it succeeds.... otherwise it’s just a failed attempt.


>Signed agreements do not prevent your information from being used in other ways. That's insane, literally.

The real insanity is with the users. FB did the most obvious thing. The user-base somehow magically thought they were providing all those free services because they were nice guys.

It was obvious to anyone since the beginning that FB was a clearinghouse for private data trading. How else could the model remotely work?


> It was obvious to anyone since the beginning that FB was a clearinghouse for private data trading.

I tire of this argument that users should obviously know they are trading their personal information in return for access to a service. They don't know this. It's obvious in a conversation with any (even lightly) tech-illiterate person that nothing about how the modern internet economy works is obvious.

Think back to the emergence of the web as a truly popular medium. There was no Google Analytics, no FB tracking buttons that follow you around on every web site you visit (that one is particularly egregious - FB users are tracked even when they aren't on facebook.com, and we expect users to just know this?), just advertisers buying a banner ad slot from the owner of a web site. Back then social networking was, what, AIM? That was free and they didn't harvest user info for it. The change has been gradual, and the idea that users should have kept up with every development that led us to where we are today is preposterous.

When this has happened in the past the answer has been clear: knowledgable people come together to pass laws that benefit individual citizens who have neither the knowledge nor time to learn.


Exactly that.

I consider myself relatively tech savvy, but when - thanks to the GDPR - that hive of scum and tracking that underpins the modern internet was revealed I was really shocked.

Clicking one of those "manage your cookies" links is a truly enlightening experience.

How should Average Q User even begin to grasp the implications of this fucking datasucking hydra?

I was glad having "deleted" my Facebook account some 4 years ago.

Man, I didn't know 10% of the shit that's really going on.


As a fellow geek, of course I was aware of the potential abuse.

Being party to conversations with C-level sociopaths is what really brings it home.

One example: one employer railed at the injustice (loss of profit) of not being able to share the millions of patients' medical records we'd accumulated with Big Pharma's marketing machine. This employer spent considerable effort trying to figure out how to work around the law (e.g. plausibly deniable anonymization).


add to this also the target audience.

FB may have started out as an invite only thing for technically savvy(ish) college kids, but those days were completely over the minute they got their app so tightly integrated with iOS and Android. So ... circa 2010 maybe?

FB's target audience now is literally as wide as or even wider than television's. We don't expect granny to understand the economics of her cable box.

She's used to paying the cable bill, seeing a few ads and feeling free to use whatever is on offer. Because she paid for it.

For instance, the guide channel is not "free" from her perspective, and she has no expectation that she should need to be suspicious that it is tracking her viewing habits and customizing the programming on offer so as to massively sway public opinion and elections.

Average US consumers are used to this model, whether or not the fine print spells out something else. They're going to think "I paid my cell phone bill, therefore I am paying for facebook" ... the question never even arises.


While I completely agree with your comment, it also omits something that just serves to underscore your point that consumers make assumptions about how things (economically) work.

> They're going to think "I paid my cell phone bill, therefore I am paying for facebook" ... the question never even arises.

Your example switches from "I pay <somebody> that enables access to <somebody else>" and you don't even have to go that far.

The guide channel is not tracking her viewing habits, but her cable box certainly is. The industry term is Addressable Media[1], and if she has any modern cable box from any major provider, then she's being tracked and targeted just as heavily as she would be on Facebook, despite paying her cable company for the privilege of watching. And while it's not customizing the programming itself that she watches, it is customizing the ads she sees. It's effectively the same as Facebook: she's using whatever is on offer and being subtly influenced by highly targeted excerpts interspersed into it.

Ever experience weird glitches in commercials that start and end almost immediately, or randomly lack sound, or any number of weird quirks that would make you think "man, someone just spent a lot of money on a messed-up commercial spot; how did that get past QA/QC?!"... chances are that was just a hiccup for you specifically as your cable box dynamically inserted an ad that wasn't hardcoded as part of the wider broadcast.

Cell phones are the same. Even though you're paying the cell company for network access, they're also double-dipping by selling targeted-audience capabilities, built on their data about you, for ads seen while using their network.

[1] https://martechtoday.com/addressable-tv-state-cross-device-a...


Most of the people I've talked to seemed to think that ads were how money was made from services like that. Promoted content that's constantly in their face trying to blend into normal content.

In fact, so much so that it's been hard to get people to realize that the recent issues haven't been related to ads at all. Facebook itself still seems to be trying to blur the lines between data collection and advertisement.

To me there's nothing wrong with ads. Even content-targeted ads (if I'm reading about gardening then show me an ad for potting soil or gloves). But the push towards creating profiles that follow you around to target ads to you on multiple platforms based on your previous behavior is just creepy and dangerous (when political ads and control over the data comes into play).


ads (if I'm reading about gardening then show me an ad for potting soil or gloves)

That’s what genuinely baffles me. If I were the manager of a gardening company and I wanted to do targeted advertising, I would just buy space in gardening-themed publications. The idea of showing people my ads while they’re on other websites, and not in a gardening mood anyway, makes literally no sense. All this tracking and profiling adds zero value for the ad buyers, so why are they paying for it?


You do that because it's not an either/or decision, and the bidding nature of digital advertising means the lower effectiveness will be reflected in what you're paying. You can pay a premium for people searching to buy gardening gloves, a little bit for people reading gardening content, and then a pittance for people who may be related to gardening in some fashion.

You're extrapolating based on your own actions rather than measuring the result. The results say that if you show gardening gloves to enough people at random, you'll eventually get some sales. Everything else is just narrowing down "random" a little bit.
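A toy expected-value calculation (all numbers invented for illustration) shows why even weakly targeted audiences still attract bids, just much smaller ones:

```python
# Hypothetical advertiser economics: bid at most your expected profit per click.
VALUE_PER_SALE = 20.0  # assumed profit on one pair of gardening gloves

# Assumed conversion rates per audience segment (made-up figures).
conversion_rates = {
    "searching for gardening gloves": 0.05,    # high intent  -> premium bid
    "reading gardening content":      0.005,   # some intent  -> modest bid
    "vaguely related to gardening":   0.0002,  # near-random  -> a pittance
}

# Break-even maximum bid per click for each segment.
max_bids = {audience: VALUE_PER_SALE * rate
            for audience, rate in conversion_rates.items()}

for audience, bid in max_bids.items():
    print(f"{audience}: bid up to ${bid:.4f} per click")
```

Under these assumptions the "random" audience is still worth bidding on, just at a fraction of a cent, which is exactly the "narrowing down random a little bit" effect described above.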


Because it does add value.

Brand awareness/clout. A reminder that you need or want that thing.


You can do that kind of advertising without collecting this much data and creating psychological profiles on every user (think: Coca-Cola).

And even if I like something seeing it repeatedly online just annoys me more than it inspires me to buy their stuff. I think they'd get more bang for their buck by having social media personalities that align with their target market push their product (think: Nike).

If people are likely to be consumers of those things then they'll probably digest media related to those things or other media in the same demographic. This is classic marketing and it doesn't require the level of privacy abuse that Facebook has been trying to justify.


Agreed.


I suppose that's true. Before Facebook I basically just talked to people over AIM and email lists. I remember setting my status on AIM so people would know how things were going. Initially creating a Facebook account did feel like just filling out your user profile in your AIM account so your friends can find information about you.

Perhaps someone was also mining the AIM data, but that would never have occurred to me to do. I doubt the tools to analyze and monetize it existed yet in any case. I'm not even sure the tools existed to monetize a social network when Facebook started.

It's a kind of weird conceptual leap to be honest, when compared to thinking about just selling ad space or perhaps targeting it based on people's interests.


On a similar theme to the ever-present Facebook button, I wonder how much tracking Google is able to do through Google Fonts; an absolute ton of sites use Google's CDN to deliver fonts to your browser.


Sure, but do we just absolve people of all individual responsibility to think even a little bit and make educated choices? I think there is more evidence that people just don't really care about this issue. After all of this came out, FB use didn't really go down.


I think it's absurdly unrealistic to expect people to do in-depth due diligence on all of the services they use. Modern society requires us to trust innumerable service providers and their dependencies to enable so much of the convenience we take for granted. What little knowledge we do have about abuse comes from the accountability of these companies to regulators like the FTC and FCC.

Realistically, I think we have three choices:

- We can't have nice things.

- Rampant exploitation of individuals, due to vast asymmetry of information.

- Regulation, with its costs and inefficiencies.

But I think there's very little precedent of real accountability deriving from collective consumer action, even in cases of overt abuse (think Wells Fargo).


I think another thing that bothers me about this mindset is that we have started viewing "consumer" action as the only valid response, which is really convenient for some people for the reasons you mention.

But in a society where citizens are represented by elected officials, regulations are "citizen" action. Regulations should not be viewed as any less legitimate than consumer action, they are just enacted through a proxy mechanism, aka elected officials.

This whole bifurcation of regulation vs consumer boycott, and the subsequent push to delegitimize regulation seems pretty artificial, and more importantly, a huge benefit to people looking to avoid any sort of boundaries on their actions.


I disagree completely, honestly. If you are signing up for a service like Facebook and they are asking you to click a bunch of boxes, it's not unrealistic to expect you to

a) either understand what you are clicking on or b) just refuse to click on it

what is unrealistic is to expect society to babysit you every single time a moderately complex choice is presented to you.


If you were to actually sit and read those agreements for everything you use, more than half of all productive time for humanity would be spent reading licensing agreements. You can refuse to click on these things, but these companies continue to insert themselves into more and more of society, until you can either choose to spend all day confirming companies aren't taking advantage of you, or check out of society.

This is a tragedy of the commons, and government regulation has been the best solution for it so far.


But surely all the evidence suggests that it is unrealistic, given that reality shows us over and over that people don't read these boxes before checking?

Do you read the entirety of every twenty-page EULA before using a new app?


Actually, I do (almost always). But that leads to its own set of issues. My banking session timed out before I finished reading it. Another service I've been using for over 10 years just added new legal terms that I don't agree to... now what? That service has a network effect... so, I have to leave? Drat! Truth is, most online services ask for more than I am willing to concede. That is an issue.


Not to mention the services that alter their TOS and "notify" you by updating a web page. No email, no summary of changes, just a quiet update to a 20 page document that you automatically agree to by continuing to log in.


All sensible choices last only until antagonistic action occurs.

In the case of legal documents -

1) the base scenario itself is terrible - attempts to make credit card terms easier to read have resulted in a huge increase in the amount of text required to read them.

2) leaving the base case aside - the moment a company or individual decides that they can get away with preying on customers, sensible options no longer work- no normal person is going to beat the legal team.

Your position is a theoretically sound position but it does not survive contact with very common real world scenarios.


You have to evaluate this position in the context of the alternatives. Do you propose the government regulate every moderately complex activity because people can't understand it? First of all, people are smarter than you give them credit for, and secondly, the Soviet Union tried this. It just doesn't work.


The Soviet Union didn't live in the modern era - the comparable country today is China, and they're doing very well. I've heard many credible claims that the Chinese state will fail, but it hasn't to date.

So that point may need to be re-thought.


Doing what very well? You are making too handwavy a statement. They are regulating some things they care about very heavily and others not at all. And "very well" is all relative; China is still a poor country, so comparing it to the US is not reasonable at this stage.


> After all of this came out, FB use didn't really go down.

It's both too early and hard to say that for sure. I would suspect it's had an impact on FB's long term trend line, especially with younger people.

Anecdotally, much of my social network has significantly decreased their Facebook use if status updates are anything to go by. They may be on the site as much as before, but they're not really engaging with it as actively as they used to.

Granted, that could just be a normal fall-off since the 2016 Presidential Election season was a "special" time. But I've noticed even with my cousins abroad they've mostly transitioned fully to WhatsApp. Sure, that is also a Facebook joint, but it sends a real signal to Facebook as to what the market values in a social media platform, and it seems it's not the Facebook model.


> But I've noticed even with my cousins abroad they've mostly transitioned fully to WhatsApp.

I've managed to get my friends and family over to other channel so we are now almost 100% Facebook free except for a few log in with Facebook and the occasional Instagram (also declining I think).

(Disclaimer: I trust WhatsApp's crypto since a number of cryptographers have audited it, but I still do not want them to have my metadata. I mean, seriously: the crypto can be unbreakable, but why do we think Facebook bought WhatsApp and made it free?)


I've been trying to do the same. Most of the communication has moved to WhatsApp and the "here's what's going on in my life" blasts have moved to Twitter and Instagram. It seems like people have reverted back to using plain old email (or Evite) for event invitations (thank God for the former ehhh on the latter).

The big impediment left has been Facebook's function as a social space to share photos with all and sundry. If not for it being the place to post photos of kids, weddings, vacations, etc. I don't think people would be spending much time on it. Seeing pictures of their nephews, nieces, kids, and grandkids on there is definitely what keeps the old people plugged in.

There really isn't another service that fills the niche either. Instagram is geared towards individual photos rather than albums or chronicles of events. Flickr kind of did, but it's basically dead now and its narrow focus on photography alone wound up gearing it towards pro or hobbyist photographers and didn't get much buy in from everyday users.

Facebook also seems to be something of a default space to host a bulletin board or community forum. Nextdoor has been trying to muscle in on that territory, but it's been plagued by problems with racist usage patterns. Those would probably go away if it were more common, but they've created some real issues with optics. Also, I have no idea what their security practices are like, so who knows if it's an improvement.


Google Photos isn't bad for making and sharing albums. But then again you're just trading one giant for another.


>It was obvious to anyone since the beginning that FB was a clearinghouse for private data trading. How else could the model remotely work?

This is just hindsight talking. It was hardly obvious from the beginning because it was hardly obvious how big the market for granular private data was going to be.

Facebook, in the beginning, was functionally just a stripped down personal page with a status update feature analogous to AIM. It didn't really ask for any information that you wouldn't have gleaned from a 10 minute conversation with a person and almost everything it got about you had to be volunteered (e.g. favorite music, movies, etc.)

The News Feed didn't get introduced until a few years in and it prompted massive outcry from Facebook users for how invasive it seemed, but even then most people assumed the problem was going to be that it violated some implicit social consensus where people should have to go looking for information about you, not have it sent to them in a notification blast. In other words: "stalking should be hard."

The ad-supported business model at the time didn't rely so heavily on micro-targeting by "revealed preferences," it was assumed they would target based on the declared preferences that you gave them (e.g. favorite movies, music, etc.) The idea that Facebook (or even data analytics as a field) would eventually become sophisticated enough to devour the news media in general and tailor you a bespoke reality based on your implied tendencies and personal weaknesses to manipulate you was, at best, fodder for some speculative cyberpunk fiction in the early days. In fact, Facebook didn't really even make forays into being a news clearinghouse/media aggregator until the mid 2000s when "going viral" became the hot trend in media, which is what ultimately led to the media world putting themselves over a barrel to social media companies by myopically chasing traffic in lieu of building an audience.


I for one should have realized what it was more or less immediately, but as a young kid in college, didn't. I don't claim that it was particularly hard to see at that point as it was well into the mid 2000s, but I wasn't particularly tech savvy or worldly yet.

By the time I deleted my facebook, no one even saw my goodbye post on their "feed." It was a different internet than the one on which I naively signed up. I, and a host of others, enabled that new internet. I have much less interest in this one.


Dude that’s BS.

The day FB was announced was the day I said “this is a bad idea.”

I and many others on HN have NEVER made a FB account, and my life seems to generally have been better for it.

But it was obvious then, and it is obvious now.

Matter of fact it frankly looks even worse today, since there really seems to be no solution, and the granularity of tools and regulation is too coarse to deal with this scenario.

The only real option I’ve seen seems to be to drop off the internet.


>The day FB was announced was the day I said “this is a bad idea.”

The "day FB was announced" it was exclusive to Harvard students and was just a cleaner version of MySpace and Friendster. It's unlikely you would have had strong opinions about Facebook, in particular, that you didn't also extend to those two, as well as AOL and sites like Digg.


To ding me for using the term announced is to find issue in semantics. My position is the same - I had the chance to be part of FB at some of the earliest stages and thought it was a bad idea as did many others.

And people clearly saw and pointed out the issues with privacy back then.

I had few issues with AOL, but it was still a simple service play - and had/has little at all in common with Facebook.

DIGG was never at the same scale or range - and from what I know it never depended on your real life profile as I recall.


>To ding me for using the term announced is to find issue in semantics.

Not pure semantics. It is important to focus on what you're actually talking about when you say "the beginning." Facebook has evolved over time, both in its service as well as its business model. So when you talk about "the beginning" it's important to know the beginning of what?

>DIGG was never at the same scale or range - and from what I know it never depended on your real life profile as I recall.

Nothing was ever at the same scale or range as Facebook. If they were, they'd have been as much of a threat as Facebook is now. That's kind of the point. A lot of the LiveJournal, Xanga, MySpace, etc. stuff from back then was all groping towards a functioning business model and it was Facebook's news feed that actually created one.

But even in the early stages the News Feed wasn't really about extracting data, just about creating a UI paradigm that made it easy to shove native advertising in your face. What really made the big data analytics game take off was when SEO and viral marketing started getting big, which wasn't really on Facebook's radar in the early days.

Advertising is different from Ad Tech. Ad Tech is about more than just Facebook's collection. People couldn't really have predicted the extent to which the media industry (new and old) was going to go whole hog into Facebook's platform. It has been their complicity that elevates Facebook's data collection from being "Wow The Facebook is nosy and annoying" to "Wow Facebook is a threat to the public sphere."


It’s irrelevant whether they had a functional business model or not - the issues with this system were known. And Facebook isn’t the only threat in the world.

It was obviously privacy invasive and the harm type any of those systems could do was obvious. They didn’t have a business model when they started out - but that’s only because they had the runway to ignore it. Otherwise there was an obvious way they were going to make money and that was Ads. Powered by data harvested about you.

I mean that’s the whole point of the site? What else is it going to do?

This is facebooks business model - it’s also the business model for many other sites. And yes those other sites are also a problem.

Hey, I’m not stating an opinion, I’m stating a fact.

I don’t have a Facebook account and always viewed it as a threat to privacy. I’m not the only one.

All that’s happening now is one of many adversarial uses of this information is being made known to people who Don’t visit slashdot and hacker news.

Facebook was going to use your personal information to make money for itself. It was going to be harmful to you.

This has happened.

The business model for other parts of the web is the same. Those parts are going to cause similar issues but at smaller scales.


>Otherwise there was an obvious way they were going to make money and that was Ads. Powered by data harvested about you.

Google has the same model, but takes data privacy more seriously than Facebook does. It's disingenuous to conflate any data harvesting with the most unscrupulous and casino-inspired iterations of it.

Arguably, the micro-targeting model of advertising Facebook goes after is more faddish than effective anyway. Google uses the data analytics as much to build more salable products as it does to serve advertising, so the volume of collection serves some kind of function. But facebook's actual business need for most of its data is dubious at best.

>Facebook was going to use your personal information to make money for itself. It was going to be harmful to you.

It's not really the use of personal information that's the harm though. It's the addiction mechanisms they leverage to make you keep giving them personal information and the lack of protections or responsibility they put around it. It's not at all "obvious" that it's going to be harmful to anyone, and arguably it's not even harmful to anyone individually so much as harmful to society and the body politic generally. You can't have those problems at "smaller scales" because those problems don't exist at small scales, they're an emergent property of scale itself.


Google in its current state. And only because their current iterations of social networks have failed.

Further google does far too much as it is. I don’t want my emails parsed to figure out what ads to present, or my uploaded files.

The addiction mechanisms are a separate class of harm, and I recognize them from games far more than from social media. They came much later, and were not part of business models back when Facebook etc. were created.

The misuse of personal data to harm people and privacy is a known idea. For example, they were warned about back in the day by people such as Huxley or Orwell.

Either way, this is an odd conversation.

These harms were clear and known.

If you are saying You didn’t see it, then that’s fine many others didn’t see it or believe it.

If you are saying I couldn’t see it, well I have long ago acted on what I saw and believed and oppose Facebook and other similarly privacy invading systems. I’m not happy with the old British surveillance state from the 90s - and I’m neither British nor American.

If you are saying the specific detailed breakdown of the harm to be caused was not known - sure. I cannot tell you which incident will finally trigger X event, but I can predict that you won't have autonomous cars (since I see people driving in the wrong lane, against the direction of traffic, very often where I live).

And a final point - the addiction mechanisms are just being cross pollinated from other systems, just like AB testing and other research helps ensure people stay on web pages.

Your deeper problem is that advertising, once a tool, has now become an end in and of itself.


Obvious to some. When consumers receive a valuable service or product for free, with no obvious strings attached, they consume it. I don't think most people would preoccupy themselves with the provider or manufacturer's business model. You are savvy, and that's good. But others deserve consideration and protection.


It's rather simple:

If something's too good to be true, it's practically always neither good nor true.


You target audiences (segments), and FB gets ad revenue.

The problem is that via apps you get the user info directly. And their friends' info too. That's how Tinder can show how many mutual friends you and the profile you're looking at have. That sounds nice, but as the data left FB's servers, there's no way that users can control what happens with it.


Facebook doesn't sell data, they sell ads.

Those ads are targetable by categories determined from private data, but Facebook doesn't give anyone else the data, that's literally why people would purchase ads through their exchange and not through another exchange.

Google has the same model; they don't sell your data, they use your data to match you with advertisers through a fairly opaque interface that lets advertisers reach the categories of people they want to reach, without revealing the data that allowed them to put someone in that category.

Kind of like how your Mom will set you up on a blind date based on what she knows about what both parties want, but doesn't disclose all your information because she doesn't want anyone to be upset with her.


Quite a few people I know signed up with FB when they were in high school. Some may have thought through the business model, I suppose.


Oh no, the people at large don't understand the complicated interactions between law and technology, those fucking idiots, how dare they


> Facebook has acted on this and intentionally made hundreds of thousands of people fall into a depression, just to see if they could, and then they bragged about it.

That's interesting. Do you have a link to source?


Likely referring to this article [1].

However, the claim that they made "hundreds of thousands of people fall into a depression" is pretty exaggerated - FB demonstrated it could influence the mood of users via changes to the algorithm, but the overall effect size was small (Cohen's d = 0.001).

I am very critical of FB and this type of research, but don't know if anyone became depressed over the experiment. It's possible, but wasn't demonstrated.

[1] http://www.pnas.org/content/111/24/8788.full


Staging an experiment of this nature without informed consent of the participants is unethical, irrespective of whether the technique proved to be effective.


"It's easier to ask for forgiveness than permission" is a very popular mantra among the boy wonders of Silicon Valley.


It was the mantra of Grace Hopper, who I wouldn't necessarily describe as a "boy wonder"; however, she used it in the context of "borrowing" equipment and machine time from bits of the Navy and academia.

She was being dry-witted about taking stuff, not recommending human testing.


When does any company get consent for changing its product or advertising and measuring the response?

I never gave my consent to McDonald's for putting up a billboard or changing their menu or their recipe.


This is a very good point in its own right (though not a point for or against fb doing this per se). Not sure why it appears in grey.





> Signed agreements do not prevent your information from being used in other ways. That's insane, literally.

Yes but just two sentences later in the post...

> And our partnership and engineering teams approved the Facebook experiences these companies built.

They're saying they inspected these apps. You jumped to "literally insane" so fast there.


Nope, you know that once the data leaves your server it is out of your control. These companies could have fooled fb representatives quite easily if they wanted.


If you're not as up in arms about the access that web browsers have to user's facebook data, then I'll assume your response to this issue is just another instance of hn's fb derangement syndrome.


This is not an accurate way to view the situation.

Here's the situation. The user has purchased a device (such as a smart phone) from a device manufacturer (such as Apple). The device runs an operating system (iOS) and a whole boatload of software, including telecommunications equipment necessary to operate cellular radios, the higher level OS, then user-installed applications and such.

If you don't trust the device manufacturer, then that's the ball game. If you assume that the device manufacturer is untrustworthy, then you should assume that it can steal the user's data just as easily from a web browser (when the user visits facebook.com) as it can from a native application.

When the user decides to sign into Facebook (or any website or application) from their device - the manufacturer-supplied device - the user is trusting the manufacturer with their data. Whether the application is delivered through a web browser, or is a so-called 'native' app installed through an app store, doesn't substantially change the trust relationship between Facebook and the manufacturer. If you believe the manufacturer is doing nefarious things, then you should assume the manufacturer will install a spyware web browser that siphons data out of every web page that you visit. Partnering with the device manufacturer to allow them to distribute a Facebook-branded mobile app does not substantially grant them more trust than they would have already.


> Signed agreements do not prevent your information from being used in other ways.

Why is this, though? How is it different from any other legal agreements?


> How is it different from any other legal agreements?

Suppose I tell you a secret. You promise only to share it with others who will keep it a secret. You share my secret with Bob. Bob promises he won't share my secret. Bob shares my secret.

You broke your promise. You tried not to break it. But break it you did.


> You broke your promise.

Then you face the legal consequences for doing so. That's what prevents people from breaking them in the first place. Unless you are arguing that all legal contracts are useless because they can be broken.

I am not asking about the logistics of leaking. Yes, of course it's physically possible; I am asking why the consequences of breaking a contract don't matter in this specific case.


> I am asking why the consequences of breaking a contract don't matter in this specific case

Technically speaking, no contract was broken. When users signed up for Facebook, they clicked a button pursuant to which Facebook indemnified itself from everything under the sun. (It is unclear how enforceable those terms are. EULAs, for example, aren't very enforceable in the U.S. [1].)

Facebook did agree not to do these things with the FTC. (It also made some noises to the Congress, but I don't believe those were under oath.) And there are a lot of things it may have done which are illegal. But enforcing the law and consent decrees requires prosecutors to prosecute. We're waiting for that.

[1] https://en.wikipedia.org/wiki/End-user_license_agreement#Enf...


> Technically speaking, no contract was broken. When users signed up for Facebook, they clicked a button pursuant to which Facebook indemnified itself from everything under the sun.

I am referring to this

> These partners signed agreements

not users.


It is difficult to say anything about the partners' agreements without seeing them. The problem is that these agreements exist in the first place. Instead of Facebook writing a Windows Phone app, they had Microsoft write the app and paid Microsoft with users' data.

The difference would be ignored if it hadn't been replicated in the Cambridge Analytica scandal. People are reasonably concerned about how much data Facebook is giving these third parties and how diligently it's ensuring they aren't retaining it.

These concerns are heightened by these agreements contradicting (a) the FTC consent decree and (b) Zuckerberg's statement to Congress. It may also be material in the ongoing biometrics lawsuit in Illinois.


> paid Microsoft with users' data

That sure sounds like a misrepresentation of what happened here. From the article: "These partners signed agreements that prevented people’s Facebook information from being used for any other purpose than to recreate Facebook-like experiences."

I don't think I would represent Facebook as "paying" Google with users' data just because Chrome has access to user data when a user visits the Facebook website.


Contracts are not "useless", but they are also not "preventing" someone from doing something just because they impose a penalty. If you want to see proof: visit a court!

Contracts motivate you to adhere to some previously-agreed-upon behavior, and the degree to which they motivate you depends on how much the penalty actually penalizes you. In the case of large corporations, seemingly big penalties are often actually negligible in comparison, with the resulting motivation to adhere to the contract being negligible as well. That is one important reason why it is deliberately misleading to use the word "preventing" in the context in which Facebook uses it here. The other one is simply that it's factually wrong, even regardless of the size of the penalty.


How do you prove that Bob leaked the information? How do you prove that Abe told Bob? How do you prove that FB was the source of information to Abe? How do you demonstrate that FB, Abe, and Bob did not take all good faith efforts to keep the information secure?

How do you show any actual damages?


>How do you prove that Bob leaked the information? How do you prove that Abe told Bob?

Data watermarks: small changes (tiny bits) that allow you to trace the leaks.

Also audit logs: you know who has what, etc. The topic has been hotly discussed with GDPR, as leaks now have rather harsh consequences (besides being dragged through the mud by, erm, newspapers).
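To make the watermarking idea concrete, here's a minimal sketch in Python. Everything here is invented for illustration (the function names, the canary scheme, the toy data); real watermarking schemes subtly perturb many fields rather than appending an obvious record, but the attribution logic is the same: each recipient gets a copy that differs deterministically, so a leaked copy identifies its source.

```python
import hashlib

def watermark(records, recipient_id):
    """Return a copy of the dataset with a recipient-specific canary record.

    The canary is derived deterministically from the recipient id, so a
    leaked copy of the dataset can be traced back to whoever received it.
    """
    canary = {"name": "c-" + hashlib.sha256(recipient_id.encode()).hexdigest()[:12]}
    return records + [canary]

def identify_leaker(leaked, recipient_ids):
    """Check a leaked dataset for any recipient's canary record."""
    names = {r.get("name") for r in leaked}
    for rid in recipient_ids:
        expected = "c-" + hashlib.sha256(rid.encode()).hexdigest()[:12]
        if expected in names:
            return rid
    return None

data = [{"name": "alice"}, {"name": "bob"}]
copy_for_abe = watermark(data, "abe-corp")
print(identify_leaker(copy_for_abe, ["abe-corp", "bob-inc"]))  # abe-corp
```

The audit-log half of the comment would be the table mapping recipient ids to the copies they were given, which is what `identify_leaker` consults.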


> How do you show any actual damages?

All of that is presumably figured out before signing an agreement. That's the whole point of an agreement vs. a pinky promise.


Even if they broke their promise (EULA), what remedy does that EULA provide for FB breaking their promise? What legal consequence is there?

The burden is on the plaintiff to prove FB violated the agreement, that violation caused damage, and the value of that damage. The EULA doesn't say the user gets a pay out if FB admits (or is obviously caught) non-compliance with the agreement.


Right. And they should be punished harshly for it, so that everyone else is much more careful about who they share that stuff with in the future.


None of this changes the fact that I want Likes on my vacation photos, kid photos, and pics of my food, all of which are of better quality than yours.


This was a satire by the way


I think the argument is that saying "I promise not to murder someone" does not in any way prevent said party turning around and murdering someone - signing a contract does not prevent these companies from violating the terms of the contract.


Not really sure I understand what the point is here... Disclosure of personal information to third-parties under NDAs and that type of contract is normal business practice, banks and health providers do it all the time (with much more sensitive data than what you typically find on Facebook). So as a society we are fine to use contracts and share this information.

Do you think that Facebook's data is more important than the information banks/health providers share and should not be shared even if proper contracts for its use are in place?


There is no comparison between such a "I promise not to murder contract" and a contractual agreement between two companies regarding data integrity.

Companies share sensitive information all the time, this is not a new thing, contracts are real things, breach of contracts is a big deal, consequences can be huge.

Both FB and most of the entities with whom they would have shared data are huge, and there'd be way too much risk in not being smart with the data.


If data isn't protected unless it's physically impossible for privileged persons to access it, then I'm afraid essentially no data is protected.


> signing a contract does not prevent these companies from violating the terms of the contract.

So whats the point of any legal agreement then?


Punishment after-the-fact to statistically disincentivize behaviours we want to disincentivize. It doesn’t prevent the thing from happening ever, it just makes it so that when it does, inevitably, happen, the system pushes back against the offender. This makes rational actors less likely to do things they could be sued for. But 1. not all actors in a market are rational; and 2. sometimes it’s rational to be “evil” if the upside is large and the chance of getting caught is very small.

Consider also: traffic laws. The punishments make most people obey them. There are still drunks, malicious drivers, just plain bad drivers, and people doing U-turns on the highway when they think no cops are around.

If you want an assurance that you won’t be hit by one of these, you have to just avoid traffic altogether.

If you give someone else an assurance that e.g. their child will never be hit by a car, then you have to never take said child on the road.

Facebook gave an assurance that our data would never cross paths with a bad actor. The only way to do that is to never take the data to where the bad actors might be.


> Facebook gave an assurance that our data would never cross paths with a bad actor.

Don't downvote for asking me this, asking in good faith. Where did they give that assurance?


The root contention here is over the word "prevent". You questioned this:

> "Signed agreements do not prevent your information from being used in other ways."

But I guess you now understand why contracts do not prevent it.


I was not asking why an agreement is unable to physically get between a person and his computer and stop the data from being used in other ways. Why would I possibly ask such a question :D

I was asking why the consequences of a breach of agreement are not a strong enough deterrent in this case.


The contention of the comment to which you originally responded was with the use of the word "prevent." If they had used a word like "disincentivizes" there wouldn't be an issue. I thought that was clear from that comment.


I don't see what the contention is. My lease agreement "prevents" me from putting nails in the walls to hang pictures. It "prevents" me because the fines are not something I'd like to pay; it would only be a "disincentive" if the fine were just $10.

What is your definition of "prevent", like physically stop from doing it?


You know, that's actually a damn good question.


In theory, discourages certain behavior with the threat of legal action. In practice, this doesn't work sometimes for various reasons. What if you're 95% certain you have the resources to crush your opponent in a legal battle, still profiting handsomely net of the legal fees you incur?


The law does not prevent anybody from murdering you with an axe. The law forbids people from murdering you with an axe.

Once you understand the distinction there, you will understand the answer to your question.


The agreement is between FB and the next party. If FB is suitably embarrassed, they can go after the next party.

That doesn't help the users much, though.


> how else did you expect us to make huge sums of money

Your tone implies that nobody but the company benefits from making 'huge sums of money'. There are employees, vendors (selling goods and services), stockholders (retirees?), governments (taxes) and so on. Why the idea that earning money is somehow not the purpose of a company? It is at the core.


But none of us care if they make huge sums of money or not. I fail to see why I should accept them being shitty because "they created shareholder value."


> That's insane, literally.

Metaphor is literally dead.


Sadly, what FB is doing is using contracts to deflect the blame in case of legal action. Shitty but standard practice.


> from being used for any other purpose than to recreate Facebook-like experiences

> they consent to having psychological experiments performed on them with no further notice

Sounds like the third parties delivered!

The difference between Facebook and a casino is more about the physical limitations of their storefronts than business models.


... and thus begins a new series of regulations designed to pierce the plausible deniability of "terms of service". People complain about GDPR and other regulations, but they are created to close loopholes that were abused by companies like Facebook.


There's probably nothing FB wants more than to be regulated into a monopoly


So long as we don't have to log on to FB to file our taxes or something, how much harm can the regulator really inflict on the general public? Assume it ends up being a regulator that's really talented at harming the general public, like the FCC...


> how much harm can the regulator really inflict on the general public?

They can make regulations on social networks that are infeasible to comply with for any company not making 10s of billions of dollars a year, effectively cementing facebook as the only legal social network. Whether you consider that a harm or not is up to you.


I think that's an insane stretch of what's really happening.


Sure. It was a reply to a (dismissive) question asking what's the worst a regulator could do... it wasn't necessarily meant to be a reflection of what's really happening.


The harm of (captured-)regulated monopolies is real, but we ought to save this argument for important targets like the Daughters Bell. I offered an admittedly far-fetched example of a possible harm from regulating FB, in the hopes that you could come up with a more plausible one. No such luck!


> People complain about GDPR and other regulations, but they are created to close loopholes that were abused by companies like Facebook

I complain about GDPR because it buttressed Facebook's position [1].

[1] https://www.wsj.com/articles/how-europes-new-privacy-rules-f...


It seems like we're seeing how scalable the mantra of "ask for forgiveness, not permission" really is.

Not Facebook scale, apparently.


Been fine so far.

