- Integrating FB functionality into a mobile OS UI obviously requires an API to render the data being displayed.
- If your phone has a home screen widget which shows friend data, obviously that friend data came from a Facebook API call.
- If you type your Facebook username and password into a settings dialog in order to enable that home screen widget to function, that’s pretty obviously consenting to enable the functionality.
- There once was a day when we demanded that our social media platforms provide these “open access” APIs specifically to allow accessing our own social feeds on our own devices.
We trust our user agents to render our private information on our devices. Sometimes we even trust our user agents to leverage network services to improve on-device performance (e.g. Amazon Silk).
If and when user agents exfiltrate our personal data off-device for data-mining purposes (e.g. Chrome Omnibar), it should be disclosed and opt-in.
It sounds like Facebook provided an API to device manufacturers to allow them to deeply integrate social features on device. This has historically been considered a Good Thing. It sounds like they put together a legal agreement that required these device manufacturers to take due care in implementing these features to protect user data. Also seems like historically this is what we would call a Good Thing.
When you enter your username and password in order to view your Facebook feed — that’s called a “user agent” and that’s something appreciably different than a third party quiz app sucking in friend feed data.
However, Chrome Omnibar aside, user agents are not expected to exfiltrate data in any way, and if that occurred, that would indeed be a story I’d like to read, and my ire in that case most certainly wouldn’t be directed against Facebook.
If the device makers' applications merely fetched data on behalf of users and displayed it on their machines, it would be no more of a problem than my email client.
In the first place, it looks like the APIs provided access to data the users had opted not to share. In the second place, the Times article seems to state that partners partook of that data themselves rather than merely acting as a client. For example:
"Facebook acknowledged that some partners did store users’ data — including friends’ data — on their own servers. A Facebook official said that regardless of where the data was kept, it was governed by strict agreements between the companies."
Before deciding whether Facebook's response was reasonable, did you even bother to read the Times article?
This statement from the article is meaningless, as many legitimate things might count as "third party storing data", such as:
1.) Storing and editing contact imports, if the user wants. This is technically friends' data, but it's my contact list.
2.) Proxying and caching: we are talking mostly about shitty phones from before Android and iOS were mainstream, so "store on their servers" could be as simple as an artefact of implementing a non-HTML Facebook client app on that shitty phone. Examples of such artefacts: custom notification channels, caching, downscaling of images. I think BlackBerry proxied all of their communications through their own servers (not 100% sure), so if Facebook was available, they probably also had to store something on their servers.
The journalist didn't even attempt to distinguish between "my device is calling this API" and "the company is making requests", resorting instead to conflating the two with the ambiguous "some partners did store users' data". This is an example of a journalist trying to create a story instead of getting to the truth. If they had dug deeper and tried to figure out which companies stored data, what type of data (was it contact imports, caching, or did they download the full graph?), and for what purpose, it would have been a valuable article.
It's hard to see any way in which making users choose between sharing their info with every dodgy quiz their friends use and forcing their friends to install the Facebook app and let it get its tendrils into their devices in order to interact with them would've been good for privacy, but this is exactly what the NYT is insisting Facebook should've done.
I think we're trying to make this different in hindsight, but APIs for this sort of thing are the replacement for "please give my app your Facebook password". I hope everyone remembers how Facebook used to ask users for their GMail password and how Mint (still) asks users for their banking passwords.
And if what you are doing is "giving an app access to my account", then letting them see everything you see is a pretty natural API.
I think over time we have realised that users want more fine-grained permissions so they don't have to make such difficult decisions of whether they "trust" an app, but I know that as a developer and user I get pretty annoyed when other software cannot interact with an app for me because the API is too limiting.
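To make the contrast concrete, here's a minimal sketch of fine-grained scopes versus the old all-or-nothing grant. All names (`profile`, `friend_list`, `friend_birthdays`) are hypothetical, not any real platform's scope names:

```python
# Hypothetical sketch: fine-grained permission scopes vs. an
# all-or-nothing grant. Scope names are made up for illustration.

def authorize(requested_scopes, granted_scopes):
    """The app only ever gets the intersection of what it asked
    for and what the user explicitly granted."""
    return requested_scopes & granted_scopes

# The old model: one grant means the app sees whatever the user sees.
all_or_nothing = authorize({"everything"}, {"everything"})

# The fine-grained model: the app asks for three scopes,
# the user grants only two.
requested = {"profile", "friend_list", "friend_birthdays"}
granted = {"profile", "friend_list"}
effective = authorize(requested, granted)

print(sorted(effective))  # the app never receives friend_birthdays
```

The trade-off the comment describes falls straight out of this: every scope the platform withholds is one more thing an interoperating app simply cannot do.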
This type of thing in journalism really concerns me with the recent moral panic against 'fake news' and the attempts to create systems which define it. When what is 'true' or factual or not is so often subtle and easily spun/twisted. Which is ironic in this context because Facebook and NYTimes are both leading players championing these efforts.
In defense of Mint, most banks are severely behind the curve when it comes to authentication, and there would be no way to provide that service otherwise.
Where the financial institution has a better API, Mint happily supports it. (Interactive Brokers, IIRC, is an example of this.)
And we still demand them for other types of apps.
If I have a Mac or iPhone, I can connect the Apple-supplied Mail app to Gmail with IMAP or POP or whatever it uses. If I have Windows with Outlook, I can connect it to whatever mail server too. And this software gets access to all the content of my emails, which is private data.
Likewise, on a smartphone, I can install a third-party app to access Hacker News or Reddit. Because both of them have an open API. (In fact, for a long time there wasn't an official Reddit mobile app, and they encouraged you to go third party.)
Because that's where your argument rests: because of server-side "rendering", the data must be sent to the servers of the software author (in this case Huawei, but doubtless many others) instead of to the device that actually displays the information.
... and that's exactly why everybody's doing cloud-based software. Because it allows them to abuse this. And once they've got the customer by the ... they charge for it.
Cloud-based software is how the author of a todo-list app can hold users' data hostage more effectively, and more detrimentally to the customer, than Microsoft's monopoly that was broken up in the cause of justice.
The NY Times article makes it sound like Microsoft was allowed access to my data. They were not. They were allowed to create an app that had access to an API so I could access my data from the Windows Phone device.
I don't know if that happened or not, but the way this article reads is very misleading and several of the vendors that they listed did not actually have access to the data.
I personally set up FB to sync contacts with my Google contacts so I could get FB profile pictures for all of my contacts. Does that count as Google improperly using FB user data? I don't think so.
If NYT are correct then we can kiss goodbye to APIs that are used by any services that are not explicitly written and signed by the service provider. In the extreme that means you won't be able to log in to facebook on the web, only via a facebook app, because there's no guarantee that a 3rd party web browser isn't stealing data. That goes for any and every service dealing with personal data, and we pretty much lose the open web.
Taking that even further, if you really want to avoid the need to trust anyone, you'd need to run an operating system created by Facebook running on hardware created by Facebook. Otherwise, the OS and hardware vendors of course have access to your personal data. I guess it's all a spectrum, and certain hardware/software vendors you just need to trust (or at least assign blame to them, not Facebook, if they maliciously steal your data).
Some might argue that centralized walled gardens like Facebook are actually a risk to "the open web" rather than contributors to it, even when they allow access to some data via an API.
See how that's a bit different? One of them is offering public information (say, stock quotes), publicly. One of them is offering private information (my sexual orientation and date of birth) to tons of third parties I didn't consent to having that information shared with.
I don't see how the situation is actually different now: if you run the official Facebook app on your Galaxy phone, then Samsung could scrape and exfiltrate the data anytime it wants. If they do, it's Samsung's fault, not Facebook's.
> Facebook’s view that the device makers are not outsiders lets the partners go even further, The Times found: They can obtain data about a user’s Facebook friends, even those who have denied Facebook permission to share information with any third parties.
> Contrary to claims by the New York Times, friends’ information, like photos, was only accessible on devices when people made a decision to share their information with those friends.
Maybe the confusion is that FB isn't treating these integrations as "third parties" since they're supposed to be pseudo-official FB apps?
Edit: Thinking about it further, this is the crux of the matter, isn't it? Whether or not an FB-approved "mobile experience" counts as an official FB app or a third-party app?
If your phone saved it or your provider saved the data in a server somewhere without your permission and used it for other purposes, that's a pretty big deal. As it is, this should only be a shock if you have no idea how the modern internet works.
In this scenario, no third party had access to anyone's data, right?
I don’t really care; if you install the facebook app you deserve whatever it does.
Facebook officially blessed other platforms, allowed them to say you were signing into Facebook, and then allowed those platforms to have access not only to your data, but your friends exhaustive data. Facebook was aware of this, explicitly allowed this, and their only safeguard was vaguely worded policies with their partners.
Even if the partners haven't stolen any data or broken their policies, Facebook intentionally and knowingly gave my data to a third party without my consent. Presumably now all 66 of these device partners have my full data, and I did not sign in on their device or otherwise authorize it.
Only for purposes of allowing the user to interact with Facebook. The platforms themselves couldn’t use the data for anything other than supporting user actions.
This is perfectly reasonable and is why APIs exist.
The issue, which has not been shown to have happened, would be if those platforms retained data without user consent and used it for other purposes. That would violate the contract they signed with Facebook and be bad for user privacy.
To a non-technology person, these two very different things sound similar ("Facebook let Microsoft access user data"). But this is meaningless without more context.
Is there any evidence that the API was actually used to arbitrarily request and exfiltrate other users' profiles? If not, this is strictly only "Facebook allowed other developers to write Facebook apps": there is no way that could possibly work without offering an API with this level of access to the app.
I friended my cousin because she is my cousin. I didn't think about what kind of device she uses. I certainly didn't intend for her to hand my personal information over to a company that I don't know.
The problem is in offering an API that makes abuse trivially simple and putting all of your faith in the "click agree" model of expecting developers to (a) understand and (b) actually comply with your policy. Particularly when many of the developers in question are non-native English speakers, or even if they are native speakers and don't bother to read the policy carefully.
These apps are just alternative Facebook clients. Don't we _want_ a system where you can use different clients to access your own data?
If the problem is not trusting the client, well, that'll be a problem for any such system, even some utopian fully open, distributed and federated social network - until you build an open source client yourself.
One thing is allowing the official FB client to see those data (FB has those data anyway), another thing is to let third parties see them and possibly store them on their servers and not only on our devices.
This is different from email and email clients. First, the expectations are different: if I send you an email I expect that you can forward it to your friends or anybody else unless I explicitly ask you not to. Second, local clients don't send mail to their authors, same for address books. Third, we know that Google and others can see most of our mails anyway, because most people use only webmail and messages are stored on the servers of those companies.
Finally, FB didn't tell us about this API and what it can do. It's this secrecy that's hurting them IMHO. Sure, I concede that they couldn't foresee the current climate around their company when they made the choice not to advertise it, or we wouldn't be in this situation now. But we're here also because of chain of bad choices from their side.
My suggestion for a social network of the future is to have a single API, used also by the official client. The servers must not trust any client, which is the usual thing we do in web development, and give all them the same level of access. It's up to the user to decide if they want to use the official client or one of any third party.
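The "never trust the client" idea above can be sketched in a few lines: the server enforces visibility per *viewer*, never per client, so the official app and any third-party app get exactly the same answers. The data, names, and visibility rules below are all hypothetical:

```python
# Sketch of "one API for every client": the server checks who the
# viewer is and what the target user allowed, and ignores which
# client sent the request. All names and data are illustrative.

PROFILES = {
    "alice": {"birthday": "1990-01-01", "visibility": "friends"},
    "bob": {"birthday": "1985-05-05", "visibility": "private"},
}
FRIENDS = {"carol": {"alice"}}  # carol is friends with alice only

def get_profile(viewer, target):
    """Return the target's profile only if the target's own
    settings allow this viewer to see it, else None."""
    profile = PROFILES[target]
    if profile["visibility"] == "private":
        return None
    if profile["visibility"] == "friends" and \
            target not in FRIENDS.get(viewer, set()):
        return None
    return profile

# Identical results whether the call came from the official client
# or a third-party one: only the viewer's identity matters.
print(get_profile("carol", "alice"))  # visible: they are friends
print(get_profile("carol", "bob"))    # None: bob is private
print(get_profile("dave", "alice"))   # None: dave isn't a friend
```

Under this design a "friends only" or "no API access" setting holds no matter which client asks, which is exactly the property the comment is arguing for.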
> The Blackberry used by the Times got data about the friends of the journalist even if those friends didn't consent to that.
They did consent to that - by becoming your friends! Are you saying that every time you open your friends' Facebook pages, a notification should be sent to those friends requiring consent?
I think perhaps you meant something else - the Blackberry got data about the friends even though the user did not consent to that:
> The Hub also requested — and received — data that Facebook’s policy appears to prohibit. Since 2015, Facebook has said that apps can request only the names of friends using the same app. But the BlackBerry app had access to all of the reporter’s Facebook friends and, for most of them, returned information such as user ID, birthday, work and education history and whether they were currently online.
This is also questionable. Would you say that Chrome requests and receives prohibited data when you use it to browse your friends list? We are talking about the distinction between a client (full access) and a third-party app (limited access). The cases described in the NYT article seem to be clients.
Further on your point:
> This is different from email and email clients. First, the expectations are different: if I send you an email I expect that you can forward it to your friends or anybody else unless I explicitly ask you not to.
You seem to be making an argument against your claim here. The parallel would be, if I accept your friend request, I expect that you can see my data and use it(i.e. by browsing your friends list on the Facebook site, or the Blackberry client).
> Second, local clients don't send mail to their authors, same for address books.
Perhaps, but doesn't that hinge on the definition of "local clients"? For example, my Outlook definitely shares information with a cloud server. Was "The Hub" an unknown/unexpected feature of the Blackberry client?
> Third, we know that Google and others can see most of our mails anyway, because most people use only webmail and messages are stored on the servers of those companies.
Are device manufacturers not included in those "others"?
The email client parallel with Google is even more in Facebook's favour. For example, is Mozilla stealing data about my friends when I use Thunderbird to access my gmail account? What if I explicitly ask it to store my emails in a "hub" of sorts, so I could sync them between PCs?
> Finally, FB didn't tell us about this API and what it can do.
I think this is the strongest point, but it's important to note that we are judging Facebook's old decisions by our new, increased focus on privacy and user control. As another commenter pointed out, we used to give our passwords to sites back in the day so they could integrate with other services (actually, this still happens in some apps...)
> My suggestion for a social network of the future is to have a single API, used also by the official client. The servers must not trust any client, which is the usual thing we do in web development, and give all them the same level of access. It's up to the user to decide if they want to use the official client or one of any third party.
But isn't this literally what is happening here? The "secret" API does not have access to any data the "official" one (used by the site) doesn't (at least, the NYT does not present any evidence to that effect). You also seem to access it by giving your credentials to the "third party client", i.e. no "special access".
- How on earth does Facebook justify giving direct API access to information that users have, in every setting possible, marked as private?
- How on earth does Facebook justify offering deep API access on users who have literally disabled API access to their data?
It's ridiculous, and it's more ridiculous that users here are conflating "basic API access with a sane permissions system to give you control" with "deep API access with no privacy controls whatsoever that openly defy existing privacy controls".
It's not acceptable, and frankly, this is EXACTLY why government regulation of data online isn't a possibility, it's an inevitability. Because when the penalty for ignoring the user's selection of "DO NOT MAKE MY DATA AVAILABLE OVER THE API" and "DO NOT MAKE MY DATA AVAILABLE TO FRIENDS OF MY FRIENDS" etc. is billions of dollars in damages and potentially criminal charges for executives, magically, these violations will stop occurring.
Until then, everything in this document is either a lie or sufficiently legalese'd that it's worthless, just like the lies that they told to Congress, just like Zuckerberg's lies to E.U. as well.
I cannot wait until it is a crime to share private user data against their will. We live in a wild west and the past 10-15 years are proving just how much sheer damage we have caused in society by not criminalizing disrespect of digital privacy.
Here's how it seems really non-crazy to me as a programmer. Let's say I make a phone with a Linux OS. I want to make an app to let users check Facebook on my weird OS. I ask Facebook, they say no. I then build an app to call their API and let users do stuff.
All my app does is call the API and show it to the user. In order for the app to function it has to use private data. But the data are not retained or analyzed, just used to operate the app.
This is how all 3rd party apps work. The APIs were not for the general public, but only to trusted third parties. The alternative is that only Facebook can build and show apps.
So this is pretty much how APIs have worked forever and why you only install apps and log into apps you trust.
It would be like complaining that Adobe Reader can read private files from your laptop via Windows/Mac APIs. Of course it can; this is good. It'd be bad if Adobe were to misuse the data.
It would be like being concerned because you and your friends both use Chrome to access your individual, private bank accounts. Google accesses your private information, with your permission, to display data on your screen. It's only bad if they misuse that data.
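The "all my app does is call the API and show it to the user" model above can be sketched in a few lines. The fetch function and feed format below are stand-ins, not any real API:

```python
# Thin-client sketch: the app fetches data with the user's own
# credentials, renders it, and retains nothing. The endpoint and
# feed shape are made up for illustration.

def fetch_feed(session_token):
    # Stand-in for an authenticated API call made with the user's
    # token; a real client would hit the platform's feed endpoint.
    return [{"author": "alice", "text": "hello"},
            {"author": "bob", "text": "world"}]

def render(feed):
    """Format posts for display; returns lines, stores nothing."""
    return [f"{post['author']}: {post['text']}" for post in feed]

lines = render(fetch_feed(session_token="user-supplied"))
print("\n".join(lines))
```

The whole privacy question then reduces to whether the client really is this thin, i.e. whether anything beyond `render` ever touches the data.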
However, your reply completely fails to grasp the problem, like so many others. It's not the fact that I type in a user/password and get my data; it's that I type in my user/password and Facebook authorizes access to as many as 10,000 to 100,000 other accounts' data, based on proximity to my account.
That's because you're just making up a problem that doesn't exist. The replies are all talking about what is actually happening, not your fantasy world. As the article states: " friends’ information, like photos, was only accessible on devices when people made a decision to share their information with those friends".
I don't understand why Apple Inc. needs access and permission to download and store my friends' data on Apple Inc. servers. That is what the NY Times reported, and Facebook has not directly denied it here.
Edit to be clear: I'm just using Apple as an example company.
I can think of a handful of Facebook-like experiences that would require an OS/device provider to store data on their servers, especially considering the constraints imposed on some devices (particularly older ones) and some OSes (particularly older ones and iOS).
Let's take a concrete example. Let's say that Apple wants to support contact syncing across devices. It also wants to support contact importing through fb's device integrated api.
Now, let's say that Apple actually implements that in such a way that it's encrypted end-to-end. Even in that case, Apple may have needed permission to store that data on their servers under whatever agreement the companies had and the Times could have written their story.
But I feel like that case isn't that interesting. Let's consider the case where your contacts are stored unencrypted on Apple's servers, but only for 3 hours while actually syncing to a new device (kind of a silly approach to the contact syncing case, but is likely more reasonable for other things). There may be a reasonable argument against that, but I think that most people wouldn't agree with it. Also, that argument would apply much more strongly to Android's contacts permissions (which don't require any sort of contract around how developers use/store that data).
Of course not. Because the fact that a user agent like this has widespread access to the data that the user has access to is expected. The OS itself also has at least that same level of access (as it has control over the behavior of the user agent). We just all seem to assume that we can trust those entities to not misuse the data (in some cases the particular software we run may have privacy policies that cover how they use that data that we necessarily give them access to).
> There's more going on in this story than a simple browsing proxy.
There's nothing in the report that indicates that. In fact, most "simple browsing prox[ies]" have no data sharing agreement limiting how they use the information from your browsing, in the cases in this story they were all limited to "signed agreements that prevented people’s Facebook information from being used for any other purpose than to recreate Facebook-like experiences." and even further "our partnership and engineering teams approved the Facebook experiences these companies built". In addition, it sounds like in most of these instances, no data was even accessed or stored off the device (i.e. they had much less access than a proxy).
I don't need to trust that entities will not misuse my data; I have legal documents on my websites that set out how entities may use the data that they download from my sites. The terms apply to my users and to any intermediary technology providers.
For the Silk browser, I don't need a data sharing agreement with Amazon because my terms say that service providers like Amazon may not copy and store data from my website for their own purposes. Amazon Silk can access my site only for the purpose of helping the end user visit my site. I expressly exclude data sharing from my relationship with service providers and users.
This is a standard part of website terms and conditions; here's an example from Facebook's terms of service:
> You may not access or collect data from our Products using automated means (without our prior permission) or attempt to access data you do not have permission to access.
And from their platform policy:
> Data Collection and Use: If you are a Tech Provider for an entity, comply with the following:
> a. Only use an entity's data on behalf of the entity (i.e., only to provide services to that entity and not for your own business purposes or another entity's purposes).
Data sharing agreements are only necessary when another entity (i.e. Amazon the company) wants to store and use data separate from, and in addition to, the service they provide to end users.
The existence of a "data sharing agreement" is proof that device manufacturers were collecting and storing user data, not just facilitating user access to Facebook. That's what "data sharing agreement" means. Further proof is that Facebook explicitly said that some companies collected and stored FB user data.
My iPhone is also syncing with iCloud.
Now my friends' "private information" is on Apple's servers via my iCloud backup.
The user logs into Facebook on the device. This log-in action on the part of the user is effectively permission to share data with the device.
> - How on earth does Facebook justify offering deep API access on users who have literally disabled API access to their data?
When the user logs into Facebook on the device, they are giving permission for their data to be transferred via an API to that device.
I do not understand your outrage.
As well as the data of 10,000 to 100,000 connected users, as both friends and friends of friends have their data pulled without any checks by the third party.
"When the user logs into Facebook on the device, they are giving permission for their data to be transferred via an API to that device."
When the user has chosen to limit their data to friends only, not friends of friends, and has additionally disabled API access to their data explicitly, then no, they are not "giving permission for their data to be transferred via API".
They are quite explicitly doing the opposite.
My outrage, stated multiple times and oddly invisible to the users here, is that my user/password doesn't authorize the third party to access 10,000 to 100,000 other users' private data, and those users' personal settings should override any third-party data hose.
How naive is Facebook really?
Inside the Bubble at Facebook
"Management will laud what employees do, show them selective facts that justify their views, and hire/promote those who behave similarly to them. Employees in isolated teams with training in a single function may not realize the broad, unintended effects of their company's work. They'll assume the best of their coworkers that they've developed friendships with from working in the trenches, without inquiring into the larger effects they're having."
Would love feedback.
X vs the Manhattan project isn't really a good test for the ethic of working on X.
You don't need to work hard to motivate people to start creating the atomic bomb when the Nazis are doing the same thing and a good chunk of your team fled the Nazis.
You don't need to work hard to motivate people to keep working on an atomic bomb when the alternative is a few million people being killed in the invasion of a bunch of islands that the inhabitants have pledged to defend to the death and thus far made good on their promise.
The Manhattan project is much less morally ambiguous than recent tech scandals (the words "recent" and "scandal" relative to the general population, people who follow tech have seen this stuff coming a mile away) because the cost of inaction in 1945 was so much higher than today. It's not like anyone was working hard to make Facebook IPO happen because they thought it would slightly reduce the chances of their relatives dying half way around the world.
On the one hand, Facebook's got a point, that if you want to be able to use Facebook on a device without going through the Facebook app or the website the device needs to be able to authenticate onto some sort of API.
On the other hand, the NYT article makes the claim that the makers of the devices got access to the Facebook data, writing "Facebook acknowledged that some partners did store users’ data — including friends’ data — on their own servers". However, Facebook never followed up on that in their article, just pointing out that if you are logged into Facebook on a BlackBerry that the BlackBerry can make the same requests you could if you were logged into Facebook through the web browser.
The question that matters which neither side addresses well is how much of that data makes it to device maker servers (for a while, the NYT homepage was claiming 'dozens' but they removed that and it doesn't appear to be substantiated in the article).
There are related worries with the device having access to the Facebook data itself, but at that point you need to start worrying about malicious activity by device makers in general. E.g. will my phone start sending my web history back to its maker as well? my bank account numbers?
And the whole time they're pointing at facebook as the corrupt ones.
Care to elaborate on why this data should be allowed to leave the phone? (and be allowed to be collected regardless of user's privacy settings)
For example, when they say "Facebook Gave Device Makers Deep Access to Data on Users and Friends", they mean Facebook let them write software that could be run by their users and give those users access to information their friends had shared. There's nothing technically untrue about this, but it gives a false impression about what information Facebook made available to who. It makes it sound like Facebook gave a big fat chunk of user data to device makers as a bribe to include Facebook on their devices, when in reality we're talking about giving their software the access it needs in order to actually provide access to Facebook in the first place.
I've seen a lot of very confused comments here and on Twitter as a result.
These two things are not mutually exclusive. They effectively did both, even if the intention is unclear. You must have forgotten how bundled apps on OEM Windows worked.
Facebook makes it sound like the user is in control.
FB says: ‘Contrary to claims by the New York Times, friends’ information, like photos, was only accessible on devices when people made a decision to share their information with those friends.’
This is the only disagreement as I can see.
If FB is correct here, then the whole thing is a non-issue. Some API giving access to the data otherwise available through a web browser is a good thing.
On the other hand, if an API provides access to information that isn't accessible through a web browser (and doesn't show in the official FB app), then it's reasonable to loudly complain.
FB saying "we didn't share photos" is trying to say "well, we didn't give away everything"
- You cannot sync your address book contacts with facebook in order to get profile pictures (you used to be able to do this)
- You cannot write an alternative Facebook client (with a better timeline, no ads, ...)
- You cannot write a complete bridge to another social network (e.g. implement Federation)
- You cannot build a P2P (serverless) application over Facebook. E.g. a chat, or something to send a file to a friend on Facebook, or to initiate a TeamViewer-like session.
All of these are either explicitly forbidden by policy or have been closed by specific changes to their API.
To be honest, I don't care too much that people were able to scrape data I put up voluntarily. The German Facebook clone was called StudiVZ - Student's Directory. This sounds a lot like a telephone book, and that was the mindset and expectation I had when signing up to Facebook. Create and curate a profile for friends and friends-of-friends to see, and I didn't care much if others saw anything, because it was irrelevant to them. I mostly cared about meeting people - being found, and finding other people. In this light, I'm more concerned about data freedom than data protection. While the latter is important of course, it's unfortunate that the former is always forgotten.
Mark got away with telling Congress ~"we don't share with third parties" and now they're saying Blackberry's not a third party.
If it's all okay, why didn't Mark come clean and tell Congress? He had a chance to explain this arguably harmless behavior, but he chose to sidestep it. Why? Did he not understand the question?
Maybe this data-sharing is reasonable; that's fine. What's not fine is that Mark withheld this from Senators. This is exactly what they were asking about, and given the chance to explain, he chose silence. He gets to avoid the public debate while the techies argue amongst themselves.
(Also any OS/kernel manufacturers who get access to your data through your usage of the OS or TCP/IP stack).
Facebook is sharing your data with Blackberry in the same manner.
I don't even see a consent difference, since you need to explicitly consent to sharing your data by entering your Facebook username and password into the Blackberry UI app. Similar to how you enter your username and password into Chrome, thus "sharing your data" with Google. It also doesn't seem like there's any evidence that the UI apps were intended to secretly collect and store data. (Was "The Hub" an unknown feature?)
I don't believe the evidence presented here invalidates Mark's answer. His answer would have been meaningless if he had taken the definition of "third party" put forward in the NYT.
They specifically mention that they were able to use BlackBerry Hub with a reporter's account to query Facebook data. The article never states whether BB Hub connects to Facebook directly, or whether it receives data from a BlackBerry-operated service.
The latter case is clearly user-hostile. If BlackBerry (the company) can read user data and Facebook claims not to allow 3rd-party access, then that is bad, and it should be treated as a breach of the user's trust.
The former case is more complex. As a user, I care a great deal that I can access Facebook using my choice of browser, whether that's Chrome, Firefox or Edge. I shouldn't be limited to the top three either. Some users may prefer a browser that works with their screen readers, others may prefer the built-in browser in their smart TV, and others yet might prefer a unified messaging app, like BB Hub.
The distinction between what happens locally or in the cloud is often unclear, and it's not getting any better. Chrome on Android wants to accelerate mobile connections by routing them through a compressing proxy. I can get an extra-secure version of Chrome from Authentic8 to protect against malware, with the caveat that it runs in their datacenter.
I feel that the tech industry in general, and Facebook in particular, are struggling to tell users what happens with their data. Sometimes it's because things actually are complicated, and sometimes just to hide obvious overreach. The obvious blowback: complaints, strict regulation and mistrust. As the people who build and run systems, we should strive to do better. Regain the trust lost by past mistakes, and get back to the point where one could realistically apply Hanlon's razor to reports of user surveillance.
On one hand, Facebook is clearly correct: If FB makes an API, and a user gives an application (written by a third party and run on a fourth party's device) their username and password, then FB cannot be blamed for the application using the username, password, and API to retrieve private data. Indeed, that's the point.
On the other hand, appearances make it look like Facebook is hiding some things: why is this not a public API? What trust are you putting in these third parties, what are you giving them that not everyone would be trusted with?
But most of all, people are waking up to the vulnerability of their private data. They are realizing that some things they've been taking for granted for years are dangerously insecure. So we have users, such as reporters, suddenly realizing that their device has access to all the data they view on it. Any third-party app you give your FB username/password to has access to everything on your Facebook, and the only limitation is whatever their terms of service are. (So does any software that app runs on top of.) Coming to this realization, we see backlash that isn't always correctly directed. It would make at least as much sense to call out those third parties rather than FB, and ask them to prove they do nothing nefarious with this trust.
Is it too optimistic to hope this will stir mainstream interest in free and open source software?
I'd prefer to use Signal and my family uses it to communicate but it's definitely not as nice as Messenger.
This is what they do on mobile:
* Lie to you about how many messages you have
* Ask you to install an app to see those messages
* Upon installing the app, give them permission to mine your data
Two people you don't know made a deal about how to use your information without telling you. There is no reason to think they are going to keep that deal, no reason to think anyone is actually checking up on your information, no reason to think either of them cares, no way to know if anyone actually keeps the deal, and probably no recourse even if you did know someone broke it.
1) If people don't want to read the fine print, whose fault is that? 2) How have we gotten to a point where we are voluntarily abdicating our choice, and then acting begrudgingly toward the new owners when they misuse it?
I must apologize for my cynicism here, but we've been going around this mountain for a very long time now (since circa 2013, IINM). I'm getting tired of hearing how people are feeling violated due to their own actions.
Does a Facebook-approved "mobile experience" count as an official FB app or a third-party app? It seems to me that the FB post is trying to frame it as the former, and everyone who's upset is trying to frame it as the latter.
Is that what this entire disagreement is about? Because if it is, maybe it would help if we just focused on that question.
How does Facebook approving of some obvious third party make the third party not a third party? Approval and third party status are orthogonal.
Their PR problems aren't rooted in SV, yet that's who this is targeted to. It doesn't make (good brand) sense.
Silicon Valley is the last place I'm consistently hearing full-throated defenses of Facebook. It makes sense to keep one's base in order.
It's the only one that could matter to them. Congress flopped when Zuckerberg testified. They are clearly no present threat. And we haven't seen a wave of action from states' attorneys general. We have no evidence users are decamping. And by extension, the advertisers are staying.
The only weak point is in (a) recruitment and (b) political support from the tech community. The first can be solved with money. Fortunately, Facebook has pots of that stuff. The second relies on keeping the armies of defenders, who call every Congressional office on their own accord on a strikingly-regular basis, working.
For an example of what happens when one loses their base, look at Uber. It went from teflon to pariah virtually overnight.
Last I saw (on HN) was that teen usage is down.
As for Congress...do you trust them not to loop around again?
The fact that FB believes SV is all they have to focus on is what created this mess in the first place, yes? Nuff said.
Teen usage has been dropping for a while, though, with corresponding rises in Twitter, Instagram, and Snapchat. Are there numbers saying teen usage reacted to these stories at all?
I could buy "Congress" or "the European Parliament", but outside of those answers I don't see it. Average users don't appear to be rejecting Facebook to a meaningful degree, no matter how much news outlets beat the privacy drum. But the Valley is small enough that FB could get stuck paying extra for engineers or even losing domain experts.
More broadly speaking: consumer boycotts don't work (directly), but supplier boycotts sometimes do. That includes labor.
To do so, these companies sent the same sequence of bytes from your mobile phone to the Facebook server, as the Facebook app does, or as any person can do. I can write my own Facebook app today, and there is nothing that Facebook can do about it, except sue me.
THAT IS A GOOD THING. LITERALLY FIVE MINUTES AGO THE COMMUNITY WAS FIGHTING IN THE COURTS FOR PACKETS SENT OVER THE INTERNET TO NOT BE CRIMINALIZED.
Remember the whole thing about how violating the terms of service of a company which forbids scraping makes you a criminal hacker?
The only thing Facebook said to Apple is: Let's make a deal, we will not sue you, you put our logo into your phone, also we promise not to break your app.
No data was given to anyone! This is literally my iPhone/Samsung/Blackberry running an app that gives ME access to MY Facebook data.
It doesn't even go to Blackberry's server! The nerve of people to pretend as if the data ON MY PHONE is somehow in the hands of a third party, as if my phone really belongs to the manufacturer. Again, we used to be fighting for the idea that these devices should be unlocked, should be under our control. Now you guys pretend that data my phone downloads from Facebook is somehow a violation because I decided to use an app that someone else wrote.
There is no possible universe in which that is bad. Think about the ramifications of these new ethics that people suggest here.
However, as you say, it is not clear at all. If that is the problem, the New York Times should write an article about that, not 5000 words of all kinds of insinuations, with the goal, it would seem, of generating the maximum amount of confusion, the least amount of education, and thus the most outrage.
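The "same sequence of bytes" argument above can be sketched concretely: any program can assemble the exact request an official app would send, and the server sees only the bytes. The endpoint path, parameters, and User-Agent string below are hypothetical placeholders, not Facebook's actual API.

```python
# Illustration only: any HTTP client can build the same request an
# official app would send. Endpoint, parameters, and User-Agent are
# hypothetical placeholders.
import urllib.parse
import urllib.request

def build_feed_request(token: str) -> urllib.request.Request:
    # The server only sees the bytes of this request; it cannot tell
    # whether the official app or a third-party client produced them.
    query = urllib.parse.urlencode({"access_token": token, "limit": 10})
    return urllib.request.Request(
        f"https://graph.facebook.com/me/feed?{query}",
        headers={"User-Agent": "PhoneVendorHub/1.0"},
    )

req = build_feed_request("EXAMPLE_TOKEN")
print(req.full_url)
```

The only levers against such a client are legal (terms of service, contracts), not technical, which is exactly the point being argued.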
You send the same sequence of bytes that they do, but the point is that the response should be different depending on who is sending the bytes, no?
In other words, it worked exactly like the Facebook app written by Facebook itself.
Hasn't FB had a history of assuming the decision was positive if the user didn't opt out, through a difficult procedure?
Facebook lies every possible time, it’s built into their perverse business model. Even if the article was actually wrong, it’s only fair that they get a taste of their own medicine every once in a while.
Bunch of nonsense. Of course there were.
Facebook's best interest, not 'common interest'
That is a big door and a pretty open use case. I am not buying into "and this is actually good for the users" story.
To me, FB must clearly choose how to handle this: (A) "we made a mistake, sorry; we will fix it" or (B) "this is working as designed; if you do not like it, go away". They could probably justify either case both internally and externally (better ethics vs better revenue), but trying to stand in the middle as they have often done in the past will likely backfire. Buy more popcorn. My 2c.
Spoken like the true Anti-Privacy Overlords they are. :(
> These partners signed agreements that prevented people’s Facebook information from being used for any other purpose than to recreate Facebook-like experiences.
That sentence does not make any sense. Signed agreements do not prevent your information from being used in other ways. That's insane, literally.
> Contrary to claims by the New York Times, friends’ information, like photos, was only accessible on devices when people made a decision to share their information with those friends.
In the past, Facebook Legal has stated that once a Facebook user has signed up, they consent to having psychological experiments performed on them with no further notice or direct consent. Facebook has acted on this and intentionally made hundreds of thousands of people fall into a depression, just to see if they could, and then they bragged about it. The sentence quoted above actually means "friends’ information, like photos, was only accessible [whenever and however we wanted to]" as Facebook considers those people to have already "made a decision to share their information with those friends" when they signed up.
No, they did not. They adjusted the proportion of positive emotional expressions and negative emotional expressions in users' news feeds in order to test for an association between that proportion and emotional expressions in users' posts. There are serious problems with what they did, but "intentionally made hundreds of thousands of people fall into a depression" is late-Slashdot style trolling.
One does not grasp the true harm caused by unethical experiments by reading only the documented (published or unpublished) scientific results. Two extreme examples, which I am by no means comparing to that of Facebook, but rather offering as canonical examples of unethical science, are the Tuskegee experiments and Nazi experimentation. Reading only the published results of these experiments does not convey the horrific harm that actually occurred. The same principle applies to referring to Facebook's scientific data as an argument that these experiments were not unethical. The scientific results only serve to shroud and conceal the actual activity which occurred in the production of those results.
(edited for clarity)
Concrete example: I am in the process of trying to hire people right now. I'm not sure what the exact best way to do that is, so I vary how I interview over time and see how well it seems to work. I record what happens in a spreadsheet, and then later on I look at the 'data' to make some judgments.
Am I experimenting on my interviewees? Am I potentially harming them?
Well certainly if I ask question A in one interview, and question A' in another, and question A' is harder and I don't pass that person, then one might be tempted to argue that maybe that person would've done better on question A; therefore, I've harmed someone by 'experimenting' and making it harder for them to get a job. Rejecting someone can certainly make them depressed.
In principle this is no different from A/B testing an interview process, using real live humans no less.
So do I personally have an ethical obligation to disclose to everyone I interview that I am conducting human experimentation on them? Or is the only difference in power and scale—I am just one person, but Facebook can A/B test on millions of people at once?
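The spreadsheet experiment described above can be sketched as a tiny pass-rate comparison. All rows, counts, and variant names here are hypothetical illustrations, not a recommended hiring method.

```python
# Minimal sketch of the "interview A/B test" described above: outcomes
# recorded per question variant, compared by pass rate. All data is
# hypothetical.
from collections import defaultdict

# (question_variant, passed?) rows, as one might keep in a spreadsheet
records = [
    ("A", True), ("A", False), ("A", True), ("A", True),
    ("A'", False), ("A'", False), ("A'", True), ("A'", False),
]

totals = defaultdict(lambda: [0, 0])  # variant -> [passes, attempts]
for variant, passed in records:
    totals[variant][1] += 1
    if passed:
        totals[variant][0] += 1

for variant, (passes, attempts) in sorted(totals.items()):
    print(f"{variant}: {passes}/{attempts} passed ({passes / attempts:.0%})")
```

With samples this small the difference between variants tells you almost nothing, which is part of why the power-and-scale question matters: Facebook's version of the same loop runs over millions of people.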
> In principle this is no different from A/B testing an interview process, using real live humans no less.
> So do I personally have an ethical obligation to disclose to everyone I interview that I am conducting human experimentation on them? Or is the only difference in power and scale—I am just one person, but Facebook can A/B test on millions of people at once?
Generally, when interviewing, we ask the same questions of each applicant. We adjust the questions from A to A' as positions are hired, and pools are assembled, but for a specific job, each candidate gets the same questions.
It would have to be different for open recruitment not tied to a specific job, but I haven't had to do that.
Four years ago, having talked to some of the same lawyers about the same topic, none of us had any thought that it was a grey area. An a priori interpretation of the law was pretty clear: a more liberal position was (and is) perfectly legal. But the most anxious minds tend to prevail in these matters.
That's a terrific idea.
Of course, which is why the conversation continues. If it was settled at the beginning without nuance, it would be defined in black and white.
I'm not sure how your reply relates, though, since I was merely imploring GP to demonstrate any awareness of the state of research rather than jumping in with a lazy question-comment.
Elsewhere this is expressed by the slogan, "I'm 12 years old and what is this?"
FB - A little negative = meh
FB - More negative = major downer
FB - Totally negative = woah, look at the sorry state they are in now.
Cake - A little laxative = mmm this cake is special somehow
Cake - More laxative = nice cake, special, where is the roll of extra TP?
Cake - Dump it in = people home for days, the shits.
In both cases, not telling them is the ethics problem.
Basically the public trust boils down to people expecting others not to harm them. Depression can become chronic. The shits could damage someone requiring medical help.
Both are clear risks people would very likely avoid, if they knew.
FB - We want to run a depression test. Plz volunteer your feed and see if you get depressed.
Cake - We have a laxative that tastes great, but you might get the shits, plz have some cake.
See the problem?
E.g.: "Starting next week, we'll try to affect your mood as part of an experiment". Or "the new version of our product purposefully tries to depress you a bit" followed by "we didn't like the results of it so we've changed it back".
And, where did the Nazis publish, again?
We certainly need people to responsibly interpret peer-reviewed literature for lay audiences. Responsibly.
Your citations were published ~50 years after the Tuskegee experiments. A proper analogy would be to compare your citations to the ethical condemnations of Facebook made 50 years from now.
>And, where did the Nazis publish, again?
The Nazis published in all of the front-and-center, mainstream publications after they were brought to the United States under Operation Paperclip (among other operations).
When the findings are published has no bearing on the ethics of the experiments. Ironically, the half-century lead time does not diminish the impact; if anything, the chronology strengthens it.
I can't answer that as it is generally agreed that this data should not be readily available, however Nazi experimentation was cited for decades, and much has been written over the ethics of this: http://www.jewishvirtuallibrary.org/the-ethics-of-using-medi...
Since it is likely unavoidable that referring to Nazis will lead to accusations of Godwining, I have edited my above post to hopefully make this distinction extremely clear.
If you refuse to even discuss the Nazis and what they did, especially in directly applicable cases like this, how can we ever hope not to repeat their mistakes?
Off topic but that ^^^ is exactly why, after 20 years, I finally gave up on Slashdot and then stumbled across HN and couldn't be happier. I have, in recent times, gone back for a visit just to see what it's like and... realised my decision to leave Slashdot was made probably 10 years later than it should have been.
“On display? I eventually had to go down to the cellar to find them.”
“That’s the display department.”
“With a flashlight.”
“Ah, well, the lights had probably gone.”
“So had the stairs.”
“But look, you found the notice, didn’t you?”
“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard'.”
Breach of contract opens you up to civil liability. Having this agreement creates disincentives that wouldn't otherwise exist.
"Prevent" doesn't mean to make something physically impossible or remove the ability to choose. Police can prevent drunken driving by announcing they will have DUI checkpoints on New Year's Eve. It doesn't make it physically impossible to drink and drive, but if people choose not to because they want to avoid the consequences, some drunk driving has been prevented.
How much restitution am I entitled to for an ambiguous violation of an amorphous concept like privacy? And how much is Facebook entitled to from a partner who (might have) violated that agreement, causing non-measurable harm to me?
This is getting a little conspiratorial: contracts are very real, and though I didn't see ours and was not privy to any special user data we might have received, we are very careful about that kind of stuff and liability is a huge, huge deal.
A contract stipulating that data has to be protected in a certain manner is a reasonable protection, depending on the sensitivity of the data.
It's very strange that you'd throw your hands up and say, 'well, FACEBOOK didn't do it, so you're out of luck.' Facebook did do it, by not regulating their third-party agreements.
Well, it’s only prevention if it succeeds.... otherwise it’s just a failed attempt.
The real insanity is with the users. FB did the most obvious thing. The user base somehow magically thought FB was providing all those free services because they were nice guys.
It was obvious to anyone since the beginning that FB was a clearinghouse for private data trading. How else could the model remotely work?
I tire of this argument that users should obviously know they are trading their personal information in return for access to a service. They don't know this. It's obvious in a conversation with any (even lightly) tech-illiterate person that nothing about how the modern internet economy works is obvious.
Think back to the emergence of the web as a truly popular medium. There was no Google Analytics, no FB tracking buttons that follow you around on every web site you visit (that one is particularly egregious - FB users are tracked even when they aren't on facebook.com, and we expect users to just know this?), just advertisers buying a banner ad slot from the owner of a web site. Back then social networking was, what, AIM? That was free and they didn't harvest user info for it. The change has been gradual, and the idea that users should have kept up with every development that led us to where we are today is preposterous.
When this has happened in the past the answer has been clear: knowledgable people come together to pass laws that benefit individual citizens who have neither the knowledge nor time to learn.
I consider myself relatively tech savvy, but when - thanks to the GDPR - that hive of scum and tracking that underpins the modern internet was revealed I was really shocked.
Clicking one of those "manage your cookies" links is a truly enlightening experience.
How should Average Q User even begin to grasp the implications of this fucking datasucking hydra?
I was glad having "deleted" my Facebook account some 4 years ago.
Man, I didn't know 10% of the shit that's really going on.
Being party to conversations with C-level sociopaths is what really brings it home.
One example: one employer railed at the injustice (loss of profit) of not being able to share the millions of patients' medical records we'd accumulated with Big Pharma's marketing machine. This employer spent considerable effort trying to figure out how to work around the law (e.g. plausibly deniable anonymization).
FB may have started out as an invite only thing for technically savvy(ish) college kids, but those days were completely over the minute they got their app so tightly integrated with iOS and Android. So ... circa 2010 maybe?
FB's target audience now is literally the same as or even wider than television's. We don't expect granny to understand the economics of her cable box.
She's used to paying the cable bill, seeing a few ads and feeling free to use whatever is on offer. Because she paid for it.
The guide channel, for instance, is not "free" from her perspective, but she has no expectation that she should need to be suspicious that it is tracking her viewing habits and customizing the programming on offer so as to massively sway public opinion and elections.
Average US consumers are used to this model, whether or not the fine print spells out something else. They're going to think "I paid my cell phone bill, therefore I am paying for facebook" ... the question never even arises.
> They're going to think "I paid my cell phone bill, therefore I am paying for facebook" ... the question never even arises.
Your example switches from "I pay <somebody> that enables access to <somebody else>" and you don't even have to go that far.
The guide channel is not tracking her viewing habits, but her cable box certainly is. The industry term is Addressable Media, and if she has any modern cable box from any major provider then she's being tracked and targeted just as heavily as she would be on Facebook, despite paying her cable company for the privilege of watching. And while it's not customizing the programming itself that she watches, it is customizing the ads she's seeing. It's effectively the same as Facebook, where she's using whatever is on offer and being subtly influenced by highly targeted excerpts interspersed into it.
Ever experience weird glitches with commercials that start and end almost immediately, or randomly lack sound, or any number of weird quirks that would make you think "man, someone just spent a lot of money for a messed-up commercial spot, how did that get past QA/QC?!"... chances are that was just a hiccup for you specifically as your cable box dynamically inserted an ad, not something hardcoded as part of the wider broadcast.
Cell phones are the same. Even though you're paying the cell company for network access, they're also double dipping by selling targeted-audience capabilities built on the data they have about you and the ads you see while using their network.
In fact, so much so that it's been hard to get people to realize that the recent issues haven't been related to ads at all. Facebook itself stills seems to be trying to blur the lines between data collection and advertisement.
To me there's nothing wrong with ads. Even content-targeted ads (if I'm reading about gardening then show me an ad for potting soil or gloves). But the push towards creating profiles that follow you around to target ads to you on multiple platforms based on your previous behavior is just creepy and dangerous (when political ads and control over the data comes into play).
That’s what genuinely baffles me. If I were a manager of a gardening company and I wanted to do targeted advertising, I would just buy space in gardening-themed publications. The idea of showing people my ads while they’re on other websites, not in a gardening mood anyway, makes literally no sense. All this tracking and profiling adds zero value for the ad buyers, so why are they paying for it?
You're extrapolating based on your own actions rather than measuring the result. The results say that if you show gardening gloves to enough people at random, you'll eventually get some sales. Everything else is just narrowing down "random" a little bit.
Brand awareness/clout. A reminder that you need or want that thing.
And even if I like something seeing it repeatedly online just annoys me more than it inspires me to buy their stuff. I think they'd get more bang for their buck by having social media personalities that align with their target market push their product (think: Nike).
If people are likely to be consumers of those things then they'll probably digest media related to those things or other media in the same demographic. This is classic marketing and it doesn't require the level of privacy abuse that Facebook has been trying to justify.
Perhaps someone was also mining the AIM data, but that would never have occurred to me to do. I doubt the tools to analyze and monetize it existed yet in any case. I'm not even sure the tools existed to monetize a social network when Facebook started.
It's a kind of weird conceptual leap to be honest, when compared to thinking about just selling ad space or perhaps targeting it based on people's interests.
Realistically, I think we have three choices:
- We can't have nice things.
- Rampant exploitation of individuals, due to vast asymmetry of information.
- Regulation, with its costs and inefficiencies.
But I think there's very little precedent of real accountability deriving from collective consumer action, even in cases of overt abuse (think Wells Fargo).
But in a society where citizens are represented by elected officials, regulations are "citizen" action. Regulations should not be viewed as any less legitimate than consumer action, they are just enacted through a proxy mechanism, aka elected officials.
This whole bifurcation of regulation vs consumer boycott, and the subsequent push to delegitimize regulation seems pretty artificial, and more importantly, a huge benefit to people looking to avoid any sort of boundaries on their actions.
a) either understand what you are clicking on or
b) just refuse to click on it
what is unrealistic is to expect society to babysit you every single time a moderately complex choice is presented to you.
This is a tragedy of the commons that government regulation has been the best solution for so far
Do you read the entirety of every twenty page EULA before using a new app?
In the case of legal documents -
1) The base scenario itself is terrible - attempts to make credit card terms easier to read have resulted in a huge increase in the amount of text required to read them.
2) Leaving the base case aside - the moment a company or individual decides that they can get away with preying on customers, sensible options no longer work - no normal person is going to beat the legal team.
Your position is a theoretically sound position but it does not survive contact with very common real world scenarios.
So that point may need to be re-thought.
It's both too early and hard to say that for sure. I would suspect it's had an impact on FB's long term trend line, especially with younger people.
Anecdotally, much of my social network has significantly decreased their Facebook use if status updates are anything to go by. They may be on the site as much as before, but they're not really engaging with it as actively as they used to.
Granted, that could just be a normal fall-off since the 2016 Presidential Election season was a "special" time. But I've noticed even with my cousins abroad they've mostly transitioned fully to WhatsApp. Sure, that is also a Facebook joint, but it sends a real signal to Facebook as to what the market values in a social media platform, and it seems it's not the Facebook model.
I've managed to get my friends and family over to other channel so we are now almost 100% Facebook free except for a few log in with Facebook and the occasional Instagram (also declining I think).
(Disclaimer: I trust WhatsApp's crypto since a number of cryptographers have audited it, but I still do not want them to have my metadata. I mean, seriously: the crypto can be unbreakable, but why do we think Facebook bought WhatsApp and made it free?)
The big impediment left has been Facebook's function as a social space to share photos with all and sundry. If not for it being the place to post photos of kids, weddings, vacations, etc. I don't think people would be spending much time on it. Seeing pictures of their nephews, nieces, kids, and grandkids on there is definitely what keeps the old people plugged in.
There really isn't another service that fills the niche either. Instagram is geared towards individual photos rather than albums or chronicles of events. Flickr kind of did, but it's basically dead now and its narrow focus on photography alone wound up gearing it towards pro or hobbyist photographers and didn't get much buy in from everyday users.
Facebook also seems to be something of a default space to host a bulletin board or community forum. Nextdoor has been trying to muscle in on that territory but it's been plagued by problems with racist usage patterns. Those would probably go away if it was more common, but it's created some real issues with optics. Also, I have no idea what their security practices are like, so who knows if it's an improvement.
This is just hindsight talking. It was hardly obvious from the beginning because it was hardly obvious how big the market for granular private data was going to be.
Facebook, in the beginning, was functionally just a stripped down personal page with a status update feature analogous to AIM. It didn't really ask for any information that you wouldn't have gleaned from a 10 minute conversation with a person and almost everything it got about you had to be volunteered (e.g. favorite music, movies, etc.)
The News Feed didn't get introduced until a few years in, and it prompted massive outcry from Facebook users for how invasive it seemed. But even then, most people assumed the problem was going to be that it violated some implicit social consensus where people should have to go looking for information about you, not have it sent to them in a notification blast. In other words: "stalking should be hard."
The ad-supported business model at the time didn't rely so heavily on micro-targeting by "revealed preferences," it was assumed they would target based on the declared preferences that you gave them (e.g. favorite movies, music, etc.) The idea that Facebook (or even data analytics as a field) would eventually become sophisticated enough to devour the news media in general and tailor you a bespoke reality based on your implied tendencies and personal weaknesses to manipulate you was, at best, fodder for some speculative cyberpunk fiction in the early days. In fact, Facebook didn't really even make forays into being a news clearinghouse/media aggregator until the mid 2000s when "going viral" became the hot trend in media, which is what ultimately led to the media world putting themselves over a barrel to social media companies by myopically chasing traffic in lieu of building an audience.
By the time I deleted my facebook, no one even saw my goodbye post on their "feed." It was a different internet than the one on which I naively signed up. I, and a host of others, enabled that new internet. I have much less interest in this one.
The day FB was announced was the day I said “this is a bad idea.”
I and many others on HN have NEVER made a FB account, and my life seems to generally have been better for it.
But it was obvious then, and it is obvious now.
Matter of fact it frankly looks even worse today, since there really seems to be no solution, and the granularity of tools and regulation is too coarse to deal with this scenario.
The only real option I’ve seen seems to be to drop off the internet.
The "day FB was announced" it was exclusive to Harvard students and was just a cleaner version of MySpace and Friendster. It's unlikely you would have had strong opinions about Facebook, in particular, that you didn't also extend to those two, as well as AOL and sites like Digg.
And people clearly saw and pointed out the issues with privacy back then.
I had few issues with AOL, but it was still a simple service play, and it had/has little at all in common with Facebook.
DIGG was never at the same scale or range - and from what I know it never depended on your real life profile as I recall.
Not pure semantics. It is important to focus on what you're actually talking about when you say "the beginning." Facebook has evolved over time, both in its service as well as its business model. So when you talk about "the beginning" it's important to know the beginning of what?
>DIGG was never at the same scale or range - and from what I know it never depended on your real life profile as I recall.
Nothing was ever at the same scale or range as Facebook. If they were, they'd have been as much of a threat as Facebook is now. That's kind of the point. A lot of the LiveJournal, Xanga, MySpace, etc. stuff from back then was all groping towards a functioning business model and it was Facebook's news feed that actually created one.
But even in the early stages the News Feed wasn't really about extracting data, just about creating a UI paradigm that made it easy to shove native advertising in your face. What really made the big data analytics game take off was when SEO and viral marketing started getting big, which wasn't really on Facebook's radar in the early days.
Advertising is different from Ad Tech. Ad Tech is about more than just Facebook's collection. People couldn't really have predicted the extent to which the media industry (new and old) was going to go whole hog into Facebook's platform. It has been their complicity that elevates Facebook's data collection from being "Wow The Facebook is nosy and annoying" to "Wow Facebook is a threat to the public sphere."
It was obviously privacy-invasive, and the type of harm any of those systems could do was obvious. They didn't have a business model when they started out, but that's only because they had the runway to ignore it. Otherwise, there was an obvious way they were going to make money, and that was ads. Powered by data harvested about you.
I mean that’s the whole point of the site? What else is it going to do?
This is Facebook's business model. It's also the business model for many other sites, and yes, those other sites are also a problem.
Hey, I’m not stating an opinion, I’m stating a fact.
I don’t have a Facebook account and always viewed it as a threat to privacy. I’m not the only one.
All that’s happening now is that one of many adversarial uses of this information is being made known to people who don’t visit Slashdot and Hacker News.
Facebook was going to use your personal information to make money for itself. It was going to be harmful to you.
This has happened.
The business model for other parts of the web is the same. Those parts are going to cause similar issues but at smaller scales.
Google has the same model, but takes data privacy more seriously than Facebook does. It's disingenuous to conflate any data harvesting with the most unscrupulous and casino-inspired iterations of it.
Arguably, the micro-targeting model of advertising Facebook goes after is more faddish than effective anyway. Google uses the data analytics as much to build more salable products as it does to serve advertising, so the volume of collection serves some kind of function. But facebook's actual business need for most of its data is dubious at best.
>Facebook was going to use your personal information to make money for itself. It was going to be harmful to you.
It's not really the use of personal information that's the harm though. It's the addiction mechanisms they leverage to make you keep giving them personal information and the lack of protections or responsibility they put around it. It's not at all "obvious" that it's going to be harmful to anyone, and arguably it's not even harmful to anyone individually so much as harmful to society and the body politic generally. You can't have those problems at "smaller scales" because those problems don't exist at small scales, they're an emergent property of scale itself.
Further, Google does far too much as it is. I don’t want my emails parsed to figure out what ads to present to me, or my uploaded files either.
The addiction mechanisms are a separate class of harm, and I recognize them from games far more than from social media. They came much later and were not part of the business models back when Facebook etc. were created.
The misuse of personal data to harm people and privacy is a known idea. For example, such harms were warned about back in the day by writers such as Huxley and Orwell.
Either way, this is an odd conversation.
These harms were clear and known.
If you are saying you didn’t see it, then that’s fine; many others didn’t see it or believe it either.
If you are saying I couldn’t have seen it, well, I long ago acted on what I saw and believed, and I oppose Facebook and other similarly privacy-invading systems. I’m not happy with the old British surveillance state from the 90s either, and I’m neither British nor American.
If you are saying the specific, detailed breakdown of the harm to be caused was not known, sure. I cannot tell you which incident will finally trigger X event, but I can still predict that you won’t have autonomous cars here (since I very often see people driving in the wrong lane, against the direction of traffic, where I live).
And a final point: the addiction mechanisms are just being cross-pollinated from other systems, just as A/B testing and other research helps ensure people stay on web pages.
Your deeper problem is that advertising, once a tool, has now become an end in and of itself.
If something's too good to be true, it's practically always neither good nor true.
The problem is that via apps you get the user info directly. And their friends' info too. That's how Tinder can show how many mutual friends you and the profile you're looking at have. That sounds nice, but once the data leaves FB's servers, there's no way for users to control what happens with it.
Those ads are targetable by categories determined from private data, but Facebook doesn't give anyone else the data, that's literally why people would purchase ads through their exchange and not through another exchange.
Google has the same model, they don't sell your data, they use your data to match you with advertisers through a fairly opaque interface that lets advertisers reach the categories of people they want to reach, without revealing the data to allowed them to put someone in that category.
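That "match without revealing" model can be sketched in a few lines. This is a toy illustration, not anything Google or Facebook actually runs; all user IDs, profile fields, and campaign names are made up. The point is that the advertiser only supplies target categories and only gets back placements, never the underlying profile data:

```python
# The platform holds user data privately; advertisers never see it.
USER_PROFILES = {
    "u1": {"age": 34, "interests": {"hiking", "cameras"}},
    "u2": {"age": 21, "interests": {"gaming"}},
}

def serve_ads(campaigns: dict) -> dict:
    """Advertisers submit target interest categories per campaign.
    The platform returns only which campaigns were shown to which
    (opaque) user IDs -- the profiles themselves stay inside."""
    placements = {}
    for user_id, profile in USER_PROFILES.items():
        for campaign_id, target_interests in campaigns.items():
            if target_interests & profile["interests"]:  # category overlap
                placements.setdefault(user_id, []).append(campaign_id)
    return placements

# The advertiser learns "c1 ran against u1", not why.
print(serve_ads({"c1": {"hiking"}, "c2": {"gaming"}}))
```

The device-API model in the article inverts this: the raw profile data itself left the platform and went to the partner, which is exactly where the control is lost.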
Kind of like how your Mom will set you up on a blind date based on what she knows about what both parties want, but doesn't disclose all your information because she doesn't want anyone to be upset with her.
That's interesting. Do you have a link to source?
However, the claim that they made "hundreds of thousands of people fall into a depression" is pretty exaggerated - FB demonstrated it could influence the mood of users via changes to the algorithm, but the overall effect size was small (Cohen's d = 0.001).
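For readers unfamiliar with the metric: Cohen's d is the difference between two group means divided by their pooled standard deviation, so d = 0.001 means the groups differed by a thousandth of a standard deviation. A minimal sketch (the sample data is invented purely for illustration):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: standardized difference between two group means,
    using the pooled sample standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Unbiased sample variances (n - 1 in the denominator)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd
```

Conventional rules of thumb call d = 0.2 "small"; 0.001 is two hundred times smaller than that, which is why "hundreds of thousands fell into a depression" overstates what was demonstrated.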
I am very critical of FB and this type of research, but don't know if anyone became depressed over the experiment. It's possible, but wasn't demonstrated.
She was being dry-witted about tatting stuff, not recommending human testing.
I never gave my consent to McDonald's for putting up a billboard or changing their menu or their recipe.
Yes but just two sentences later in the post...
> And our partnership and engineering teams approved the Facebook experiences these companies built.
They're saying they inspected these apps. You jumped to "literally insane" so fast there.
Here's the situation. The user has purchased a device (such as a smart phone) from a device manufacturer (such as Apple). The device runs an operating system (iOS) and a whole boatload of software, including telecommunications equipment necessary to operate cellular radios, the higher level OS, then user-installed applications and such.
If you don't trust the device manufacturer, then that's the ball game. If you assume that the device manufacturer is untrustworthy, then you should assume that it can steal the user's data just as easily from a web browser (when the user visits facebook.com) as it can from a native application.
When the user decides to sign into Facebook (or any website or application) from their device - the manufacturer-supplied device - the user is trusting the manufacturer with their data. Whether the application is delivered through a web browser, or is a so-called 'native' app installed through an app store, doesn't substantially change the trust relationship between Facebook and the manufacturer. If you believe the manufacturer is doing nefarious things, then you should assume the manufacturer will install a spyware web browser that siphons data out of every web page that you visit. Partnering with the device manufacturer to allow them to distribute a Facebook-branded mobile app does not substantially grant them more trust than they would have already.
Why is this, though? How is it different from any other legal agreements?
Suppose I tell you a secret. You promise only to share it with others who will keep it a secret. You share my secret with Bob. Bob promises he won't share my secret. Bob shares my secret.
You broke your promise. You tried not to break it. But break it you did.
Then you face the legal consequences for doing so. That's what prevents people from breaking them in the first place. Unless you are arguing that all legal contracts are useless because they can be broken?
I am not asking about the logistics of leaking. Yes, of course it's physically possible. I am asking why the consequences of breaking a contract don't matter in this specific case.
Technically speaking, no contract was broken. When users signed up for Facebook, they clicked a button pursuant to which Facebook indemnified itself from everything under the sun. (It is unclear how enforceable those terms are. EULAs, for example, aren't very enforceable in the U.S.)
Facebook did agree not to do these things with the FTC. (It also made some noises to the Congress, but I don't believe those were under oath.) And there are a lot of things it may have done which are illegal. But enforcing the law and consent decrees requires prosecutors to prosecute. We're waiting for that.
I am referring to this
> These partners signed agreements
The difference would be ignored if it hadn't been replicated in the Cambridge Analytica scandal. People are reasonably concerned about how much data Facebook is giving these third parties and how diligently it's ensuring they aren't retaining it.
These concerns are heightened by the existence of these agreements contradicting with (a) the FTC consent decree and (b) Zuckerberg's statement to Congress. It may also be material in the ongoing biometrics lawsuit in Illinois.
That sure sounds like a misrepresentation of what happened here. From the article: "These partners signed agreements that prevented people’s Facebook information from being used for any other purpose than to recreate Facebook-like experiences."
I don't think I would represent facebook as "paying" google with user's data just because Chrome has access to user data when a user visits the facebook website.
Contracts motivate you to adhere to some previously-agreed-upon behavior, and the degree to which they motivate you depends on how much the penalty actually penalizes you. In the case of large corporations, seemingly big penalties are often actually negligible in comparison, with the resulting motivation to adhere to the contract being negligible as well. That is one important reason why it is deliberately misleading to use the word "preventing" in the context in which Facebook uses it here. The other one is simply that it's factually wrong, even regardless of the size of the penalty.
How do you show any actual damages?
Data watermarks: small changes (tiny bits) that allow leaks to be traced back to their source.
Also audit logs: you know who has what, etc. The topic has been hotly discussed around GDPR, as leaks have rather harsh consequences (besides being dragged through the mud, erm, the newspapers).
All of that presumably is figured out before signing an agreement. That's the whole point of an agreement vs. a pinky promise.
The burden is on the plaintiff to prove FB violated the agreement, that violation caused damage, and the value of that damage. The EULA doesn't say the user gets a pay out if FB admits (or is obviously caught) non-compliance with the agreement.
Do you think that Facebook's data is more important than the information banks/health providers share and should not be shared even if proper contracts for its use are in place?
Companies share sensitive information all the time, this is not a new thing, contracts are real things, breach of contracts is a big deal, consequences can be huge.
Both FB and most of the entities with whom they would have shared data are huge, and there'd be way too much risk in not being smart with the data.
So what's the point of any legal agreement, then?
Consider also: traffic laws. The punishments make most people obey them. There are still drunks, malicious drivers, just plain bad drivers, and people doing U-turns on the highway when they think no cops are around.
If you want an assurance that you won’t be hit by one of these, you have to just avoid traffic altogether.
If you give someone else an assurance that e.g. their child will never be hit by a car, then you have to never take said child on the road.
Facebook gave an assurance that our data would never cross paths with a bad actor. The only way to do that is to never take the data to where the bad actors might be.
Don't downvote me for asking this; I'm asking in good faith. Where did they give that assurance?
> "Signed agreements do not prevent your information from being used in other ways."
But I guess you now understand why contracts do not prevent it.
I was asking why the consequences of breaching the agreement are not a strong enough deterrent in this case.
What is your definition of "prevent", like physically stop from doing it?
Once you understand the distinction there, you will understand the answer to your question.
That doesn't help the users much, though.
Your tone is as if to imply that nobody but the company benefits from making 'huge sums of money'. There are employees, vendors (selling goods and services), stockholders (retirees?), governments (taxes) and so on. Why the idea that earning money is somehow not the purpose of a company? It is at the core.
Metaphor is literally dead.
> they consent to having psychological experiments performed on them with no further notice
Sounds like the third parties delivered!
The difference between Facebook and a casino is more about the physical limitations of their storefronts than business models.
They can make regulations on social networks that are infeasible to comply with for any company not making 10s of billions of dollars a year, effectively cementing facebook as the only legal social network. Whether you consider that a harm or not is up to you.
I complain about GDPR because it buttressed Facebook's position.
Not Facebook scale, apparently.