Hacker News
Facebook has been paying people to install a “Research” VPN (techcrunch.com)
1454 points by randomacct3847 on Jan 29, 2019 | 445 comments

Some of the highlights from Will Strafach, who did the actual app research for TechCrunch:

#1 "they didn't even bother to change the function names, the selector names, or even the "ONV" class prefix. it's literally all just Onavo code with a different UI."

#2 "the Root Certificate they have users install so that they can access any TLS-encrypted traffic they'd like."

My editorializing: I had been suspicious that Facebook was getting the "submarine" treatment (1), but the sheer scumminess of #1 above, which is essentially a big fuck-you to Apple, pretty well supports the recent view that FB will break any rule that serves to further its own ends.

via https://twitter.com/chronic/status/1090394419902197761

(1) http://www.paulgraham.com/submarine.html

Side note: This wouldn't get around cert-pinning? Even if a new trusted CA is installed on the system, an app implementing cert pinning still wouldn't trust this new CA. Seems that could be a wise move for Facebook's rivals that want to limit snooping.

Edit: on 2nd thought even if Facebook can't decrypt a particular app's traffic, just knowing how many requests it makes, how large they are, and how often, could still provide some useful insights into an app's usage.
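
To illustrate, here's a small Python sketch (hostnames and byte counts are made-up sample data) of what a VPN operator can tally without decrypting anything:

```python
from collections import defaultdict

# Hypothetical flow log: even without breaking TLS, a VPN endpoint sees
# the destination host (via DNS/SNI), plus the timing and size of each flow.
flows = [
    ("api.whatsapp.example", 1200, 340),
    ("api.whatsapp.example", 900, 210),
    ("cdn.photoapp.example", 400, 52_000),
]

usage = defaultdict(lambda: {"requests": 0, "bytes": 0})
for host, sent, received in flows:
    usage[host]["requests"] += 1
    usage[host]["bytes"] += sent + received

# Per-host request counts and volumes already reveal which apps are
# popular and how heavily each one is used.
```

Aggregates like these are exactly the kind of signal that reportedly let FB spot WhatsApp's rise, no plaintext required.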

If the app hard-codes its pins, it shouldn't trust another cert. Note, though, that browsers like Chrome ignore cert pinning if the cert chains up to a locally trusted CA. So the answer is more "it depends".

> Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. “Data loss prevention” appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning.


EDIT: Spaced on the fact that this is a phone app. While Chrome on Windows ignores certificate pins, I'm unsure whether the same applies to the Android / iOS root stores.

So what DOES pinning protect against? Certs generated by state actors with access to CAs?

That, and (more commonly) CAs mis-issuing certificates to malicious actors due to bugs or weak internal controls.

You can enforce certificate pinning in your own native app. You can even go as far as not trusting the hookable (on a jailbroken device) system libraries and linking in your own OpenSSL or something similar.

I was under the impression Chrome was dropping HPKP support (it was deprecated in 2017), so I imagine this would be across all products - desktop and mobile.

Chrome is dropping support for dynamic HPKP pins (pins that sites can add via headers) but not for static pins (Google domains etc.)

Source: https://bugs.chromium.org/p/chromium/issues/detail?id=779166

> OK, we're looking at removing dynamic PKP in M69. Static PKP will remain until further notice (we have no active plans to remove it right now).
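
For reference, a dynamic pin was one a site declared itself via the now-deprecated Public-Key-Pins response header defined in RFC 7469 (the pin values below are placeholders):

```
Public-Key-Pins: pin-sha256="primaryPinBase64Placeholder="; pin-sha256="backupPinBase64Placeholder="; max-age=5184000; includeSubDomains
```

Static pins, by contrast, are compiled into the Chrome binary itself for Google and a handful of other domains, which is why they survive the HPKP removal.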

AFAIK, yes. You would have to modify the application itself to disable certificate pinning.

You have root. It might be inconvenient, but it's hardly impossible to modify files and drop your own pinned cert into other binaries.

As I've mentioned elsewhere in other threads, this app doesn't actually obtain root access (that is, it's not a jailbreak); the article/title is confusing a "root certificate" with "root access".

I know Uber and a few other apps certificate pin, since running a mitm proxy myself made Uber non-functional.

You can fix that with https://www.frida.re/

Not without access to a jailbroken device.

Is it possible to set this up to work like charles proxy?

Yeah, the idea is to remove pinning with frida and then your proxy will see all the traffic. There's some sample scripts here, and if you google around you'll find one for iOS: https://codeshare.frida.re/

Jailbreak required, of course.

You are correct!

Many apps do not do pinning currently.

Installing a root certificate is a major breach of privacy. This is what Fiddler does on Windows when you need development access to a machine and want to view the data being transferred to and from all websites, including over HTTPS.

By using a VPN they forced all traffic to go through their servers, and with the root certificate they are able to monitor and gather data from every single app and website users visit or use, which would include medical apps, chat apps, maps/GPS apps, and even core operating-system apps. So for users of Facebook's VPN, they are effectively able to mine data that actually belongs to other apps and websites.
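
To spell out the mechanics, here's a deliberately simplified toy model in Python (CA names are invented, and real validation verifies cryptographic signatures, not string membership) of why installing one extra root changes everything:

```python
# Toy trust model: a chain "validates" if its topmost issuer is in the
# trust store. String membership stands in for signature verification.
system_roots = {"SomePublicRootCA"}

def chain_valid(chain, trust_store):
    return chain[-1] in trust_store

real_chain = ["bank.example.com", "SomePublicRootCA"]
mitm_chain = ["bank.example.com", "ResearchAppRootCA"]  # forged by the VPN

assert chain_valid(real_chain, system_roots)
assert not chain_valid(mitm_chain, system_roots)

# Once the user installs the research app's root certificate...
roots_after_install = system_roots | {"ResearchAppRootCA"}
# ...the forged chain is indistinguishable from a legitimate one:
assert chain_valid(mitm_chain, roots_after_install)
```

From that point on, the VPN endpoint can terminate every TLS connection, read and rewrite the plaintext, and re-encrypt it toward the real server, and the device will show no warning.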

Let's not forget they can modify the content as well!

Imagine the malicious things someone could do with this level of access. And none of the usual mechanisms to, say, detect widespread compromise of a Certificate Authority would apply here.

They could drain your bank account, hiding the transactions and adjusting the balance whenever you viewed the mobile banking website or used your mobile banking app and adjusting any emailed statements.

They could send messages to your friends and family from your account asking them to send money to you in a certain way or donate money to a "charity", hiding the entire conversation from your view.

They could make some services you use slower or less reliable in subtle ways, to steer you towards the ones they want you to use— the ones that are easier for them to manipulate the traffic to/from.

They could make you think you're going insane, in any one of a variety of ways.

They could gather all of your private information, and then lock you out of your entire digital life all at once. Two-factor authentication wouldn't protect you; they could present you with a fake "re-confirm your settings" process to collect the information necessary to disable or replace the settings. (If you pay your rent using your phone, they could lock you out of your physical life too; they could prevent the payment from going through, show you a confirmation, and suppress notifications of unpaid rent and e-mails from your landlord.)

They could control which news you see, slowly shifting your views on things like privacy and security.

If you get suspicious about any of this, they could plant false information in your search results.


- installing a root certificate to access encrypted data (potentially of websites? including banking websites?)

- means bypassing a security mechanism put in place to prevent people from accessing data sent to a server (from a browser/app)

- means potentially bypassing DRM, too

=> isn't that illegal even if the user agrees to it? (at least in the US, where some laws, especially DRM-related ones, are strange)

No, that's absurd; this is standard practice for almost all enterprise setups, as a central component of the deep-packet-inspection solutions used in network security. So the chance of it being illegal when the user has agreed to participate is slim. The only thing that would make it illegal would be lying about the functionality and intrusiveness.

Remember that in the case of the Facebook VPN, they were asking teens to install it, which means the teens involved are (according to US law) a protected class: under adult age. Underage folks can't legally agree to participate in anything.

Keep in mind, just because everyone does it doesn't mean squat about whether it's legal or not. It means it hasn't been tested in the legal system, such that a judgment has been passed regarding legality.

If somehow, the practice were found to be a key part of circumventing some large industry's means of controlling something, it could realistically end up becoming illegal once subject to legal scrutiny.

How would this bypass DRM?

I think the parent is implying that HTTPS encryption is a form of DRM. It's a reasonable question because AFAIK, the DMCA doesn't specifically call out DRM but rather any security/protection measures, so I can see how HTTPS would fall into that.

(However, upthread comments indicate that using certificates to see HTTPS traffic is a fairly common practice in enterprise setups)

On most devices the cert the VPN server uses has to chain up to a trusted root ca.

The article didn't actually mention whether Facebook were MITMing TLS connections by default or whether they'd observed them doing it at all.

Duh ;). FB isn't the only one... So many do this, your head would spin twice and it wouldn't even hurt. People need to see the bigger picture. FB is just one example.

> So many do this

Point out a few? This needs to be common knowledge.

[citation needed]

How is this not in violation of most wiretapping laws? Facebook is not the common carrier in these cases. Both parties of conversations with teens are not consenting to the wiretapping, which is not allowed in many US states. I’m not sure teenage consent is considered “consent” and the parents aren’t a party to the conversations Facebook is wiretapping. Facebook is both paying people and recording the electronic communications... So how the hell is this legal under current laws?

>How is this not in violation of most wiretapping laws?

This is, perhaps, the most apt question to take away from this. If an individual did this, even with an EULA, that would be a fast-tracked way for them to see the inside of a penitentiary in almost any country, yeah?

Unless you can spend tens of millions on lawyers when the benefit of wiretapping would be in the billions.

im sure a disclaimer is in the eula (boo).. but if it were deemed illegal, i suspect fb would just pay the fines, or more likely, be given time to come into compliance with the law..

FB isn’t the service provider, they’re intercepting and recording private conversations they aren’t a party to... where not all the parties are consenting, and sometimes the party is not of age to consent... If it’s criminal, why should they be given “time”?

i agree it's not justice, but the answer is that they have power. you could download the vpn and then try to press charges. hope you have lots of money.

A 13-year-old can't make a contract in law, certainly not in the UK. Was the app installed outside the US?

A 19-year-old can. If he has private conversations with his 13-year-old brother, the study will indirectly gather the kid's data. The GDPR doesn't care whether you gathered data directly or indirectly; you are liable either way.

It's not quite that cut and dry in the UK[0], and the article mentions that parental consent was required.

But if they were operating in the UK, I'm somewhat doubtful their disclosures as reported in the article could be classed as informed consent under GDPR given the "specific protection" children are provided [1].

[0] http://www.steinfeldlaw.co.uk/uploads/Are%20you%20contractin...

[1] https://ico.org.uk/for-organisations/guide-to-data-protectio...

They can make a contract in the US, but they generally have the right to disaffirm it until 18.

Has this been used in Europe as well? If yes, can someone affected please exercise their GDPR rights and ask for information and access to their personal data?

The fact that this exists makes me uncomfortable, but I'm having trouble pinpointing a reason why it's bad. People are opting into the data collection. Perhaps they don't know the full extent of what Facebook is tracking, but sideloading apps on iOS is not a one-tap process—anyone who used this had a sense of what they were doing.

And, $20 per month is pretty substantial compensation.

The way that Facebook is bypassing Apple's rules feels shady, but I've always felt those rules were user-hostile to begin with. I firmly believe that users should have control over their own devices, and that means letting users give information to companies if they so choose—especially if they're being financially compensated.

Recently, there was an HN thread of a Chinese man who sold his kidney for an iPhone at 17 years of age and 8 years later was bedridden for life because his remaining kidney failed.[1]

You could say the people who bought his kidney for an iPhone did nothing wrong. The kid had control over his own body and they made a deal the kid thought was good.

I think, though, that he wasn't properly educated of the risks that doing such a trade would leave him with, and that the people who offered him the deal very well knew them, but targeted him for being a naive child who wouldn't take them seriously.

I think this is the same case. People just don't understand or don't take the risks of this seriously enough, and companies like Facebook take advantage of that.

[1] https://news.ycombinator.com/item?id=18925780

This is actually a great comparison, because I think both boil down to (a lack of?) informed consent. I think there's nothing inherently wrong with selling all your data for $20 a month. If you know exactly what you're doing.

Very likely the kid didn't understand exactly all the risks and wasn't informed when selling his kidney. And I would guess it's the same thing for a lot of kids signing up for this facebook thingy thinking "free $20 a month? Let's fucking goooooooo!"

But I honestly feel just as creeped out by apple dictating what people can install on their phones and what they can't.

The danger is judging informed consent by what we would find acceptable. It's conceivable that one could understand the risks yet not appreciate the probability involved, or have a value system skewed towards present gratification. The point is that the concept of informed consent in cases like this don't really add much information.

You're allowed to set your value system however you want; that's not my business. My issue is just that there are no details on their server-side security, and that the terms used to describe the effect of the root certificate are not nearly strong enough. But if someone truly is informed about who is collecting this, then there's nothing wrong, other than FB violating its agreement with Apple, which I trust Apple is seriously chewing them out for.

I think there's nothing inherently wrong with selling all your data for $20 a month. If you know exactly what you're doing.

Informed consent is _impossible_ here because there is no way a person can know what future use the data will be put, and Facebook sure as shit ain't gonna tell, and they're even less likely to limit their future use of the data through an agreement made in the present.

If they don't limit and detail all use of the data, collection would (probably) be illegal under the gdpr.

See, I felt like it was not a great comparison because a body part is a very different thing from web traffic. Since Facebook does not stand to gain anything from inflicting material harm on you, the expected value of giving them your data is at worst very slightly negative. They’re likely not even recording most of it. (What would they do with it?) The same can emphatically not be said of a mission critical singly-redundant organ.

you may not feel immediate effects but at a societal scale it does come back to bite you

They can’t run this program on a societal level because it would cost too much (>100B a year in the US alone, which is surely more than the value they derive from it). So we really only need to be concerned about whether this hurts the individuals involved, and whether Facebook plans to do anything unethical with the data.

You're implying (perhaps unintentionally) that selling an organ is equivalent to selling information on what websites you visit. There's a reason selling organs is illegal, at least in the US.

I don't think personal information is so valuable that we need to outlaw its sale.

I’m led to believe a whole bunch of intellectual heavyweights agree that privacy is so important it needs to be heavily regulated fairly immediately.

I tend to agree with them.

Come to think of it, is there anyone making a strong case for weaker privacy protection? I’m prepared to put aside my existing assumptions long enough to read an article or two.

No one said they were equal, just that informed consent is important.

Organ selling isn't illegal because organs are valuable.

Impingement on the quality of life of the seller of an organ can be, and is, translated into a value statement. But human beings are paradoxical and rely largely on emotional responses: we can have people starving in the streets, yet because it would make society squeamish, they can't be allowed to sell their organs, for the sake of their own human dignity and quality of life, never mind that society stripped them of both anyway.

This isn't paradoxical at all, when both options are horrific there is little benefit to cover up one with the other and claim a solution.

Furthermore a short term windfall from selling organs will not provide the skills or assets required to prevent long term starvation and homelessness; so now the horror has been compounded: homeless, starving, and prone to debilitating illness.

The paradoxical nature is that being homeless and starving is horrifying but something humans have become accustomed to, while the idea of selling organs is culturally embedded as shocking, even though being desperate enough to sell an organ is a relatively equivalent fate; so it's not allowed.

Also, it's surprising that you state as fact that money from selling organs won't prevent long-term starvation and homelessness; you don't know that one way or the other, yet you're passing an opinion off as fact. I can't argue that it would help by any clear and definite metric either, but the correlation between living conditions and money is clear. How real-life legalization of organ selling would handle the quality of surgery and post-surgery care, legal predatory practices, and other factors is a genuine problem, but those are issues that exist in all commercial domains.

I also never argued for organ selling as a solution to a problem; as others mentioned, it is a potential band-aid for a problem that lies in wealth inequality, but one I find interesting as a societal flashpoint that shows how knee-jerk emotional responses can cause logical paradoxes.

Impingement on quality of life by the seller of an organ can be, and is translated to a value statement.

How much is a Jew's life worth vs a Christian's life vs a Muslim's life? Or would it make you squeamish to try and adjudicate that? Or are some things not worth putting a price on because of principles, because breaking those principles would have worse second and third order effects?

Well, the market as an aggregate dictates that. It's entirely reasonable to assume that from market to market, buyers have different value assessments for people of different religions, and place different values on each. On a personal level, I would pass judgment by objective quality of organ, but that's beside the point. The idea of employment already places value statements on people, and mainstream moral perceptions mark discrete points where things shift from moral to immoral, when in fact it's all on a continuous scale and the point at which it shifts is arbitrary.

> equivalent to selling information on what websites you visit.

You make it sound like they're just obtaining a list of URLs, but that's not it. For $20, they get to impersonate you while you're in the VPN and after you leave (they have all your passwords and session cookies). They can also impersonate anyone you deal with. They have all decrypted information going between you and the rest of the internet. Not even your ISP gets that amount.

Even more problematic is the scale at which they can do this. This isn't just a concern we should have as individuals but also as a group. They can control a large portion of the information flow and authentication on the whole web.

> I don't think personal information is so valuable that we need to outlaw its sale.

Europeans seem to disagree. Maybe your "reason" is just cultural bias?

GDPR doesn't outlaw the sale of personal data. It just prohibits selling personal data without user consent.

Under GDPR, it should be perfectly legal to compensate a user for agreeing to share personal data with marketing companies.

No, the point is that under GDPR you can't sell personal data because it remains the property of the person it was shared by. You can sell access to that data, but the purchaser inherits the terms under which that data was shared and can't use it for any purposes the owner did not already agree to -- plus that transfer itself has to be consensual.

Also under GDPR the consent can be revoked at any point and the data has to be deleted. Plus the owner has to be given exhaustive information about what data was gathered, what basis it was gathered on and who it was passed on to (recursively).

It comes down to informed consent. But that's a whole can of worms.

One which needs to be opened and we need to sift through to find an acceptable answer. It is difficult to find something that allows freedom of choice and doesn't require a huge amount of knowledge. Plus, who decides what the facts are? The required minimum knowledge? How do you measure understanding? Are we going to have a ministry of truth? If so, who watches the watchmen?

Most kidney failure causes will affect both kidneys.

His was apparently caused by a lack of quality post-operative care.

The failure was brought on by the surgery itself.

I got chills reading this. Great comparison.

By my calculations it was either an iPhone 4 or 4S.

> People

Children, specifically.

> Perhaps they don't know the full extent of what Facebook is tracking

…is this not bad?

> sideloading apps on iOS is not a one-tap process—anyone who used this had some sense of what they were doing

This is not sideloading; this is enterprise app distribution. Users are not self-signing this app.

> And, $20 per month is pretty substantial compensation.

For a child who doesn't know any better, maybe…

> This is not sideloading; this is enterprise app distribution. Users are not self-signing this app.

Just to be clear, that's the process I was referring to. I consider it a form of side loading, because the app is coming from an unofficial, non-Apple source.

Enterprise apps won't run until you manually go into settings and certify that you trust the developer. Far from the most onerous of tasks, to be sure, but significantly more involved than tapping a download button. I don't see how someone could be "duped" into running an enterprise app.

They weren't duped into running it, i.e. they knew they were downloading and installing an app from Facebook Research. They likely did not understand the effect of installing the VPN and trusted root provided by the app.

> They likely did not understand the effect of installing the VPN and trusted root provided by the app.

I'm just not convinced on this point. I think it's likely a lot of people did understand that Facebook could see all their internet traffic, and thought for $20 it was a fair trade. There's a HN user down thread (anonymous5133) who says he used the app and quite liked the exchange.

Now, it's possible these users did not think through all the consequences that sending this data to Facebook could have—but just how much responsibility does Facebook have here? Does Facebook need to say in big red type, "This data could be given to health insurance companies some day and used to deny coverage?" (I'm not even clear if that would be legal, but I bring it up as an oft-cited nightmare scenario.)

They're not sending cash, it's a check or a deposit into a debit card. Not many "children" under the age of 16 have those.

So many of the comments in this thread are literally "Think of the children!"

Actually, the article says the money was paid "via e-gift cards", so I don't think a bank account would have been needed.

I use the app. It is $20 amazon gift card.

May I ask why?

Really? I think I opened my first bank account when I was 10 or so. My parents had to co-sign but other than that were not involved in it. I don't think this was very unusual, the bank even promoted a special free account for young people. I had summer jobs from the age of 12, would you expect me to shove the money under my mattress?

Well, your parents surely would be pretty involved with that, if their 12-year-old son needs a ride down to the bank. It's not something that's going to slip under their noses.

If they're thinking of the child, that is.

Which brings us back to you falsely claiming anyone but you uttered that phrase, and your implied assertion that there is something wrong or odd with looking out for others, including children.

Many of the things they are spying on aren't just the business of the person who's getting the $20 either. If you message someone, do you expect them to turn around and sell that to Facebook? If you are one of Facebook's competitors, don't you deserve to not have them use their monopolistic power to scrape your data?

If there had been a hack of the data collected by this program, people could very easily and rightly have been fired or disciplined for exposing corporate data through a VPN. Even if most things are run through a corporate VPN or intranet, one sign in to your work account on Office 365 is game over for the company.

Whether or not you agree with Apple's rules, the fact that Facebook is willing to violate their Enterprise Certificate agreement with Apple is a red flag.

If Facebook is willing to break an agreement with one of the largest corporations on the planet, what reason is there to think that they will keep any promises they make to individual users?

> I'm having trouble pinpointing a reason why it's bad … $20 per month is pretty substantial compensation

But Facebook's competitors did not consent to their traffic being spied on and had a reasonable expectation that their traffic would remain safe from this type of intervention on iOS devices. Ignoring the ethics of paying users for data and so on, this seems like a straight up case of industrial espionage. The article says that this is how FB spotted the rise of WhatsApp, and presumably informed the offer. They would have known exact usage information, this is espionage via surveillance.

Why do Facebook's competitors get a say in it? Facebook is only getting access to user-visible data; if the users decide to give that away (in exchange for compensation), it's their right. Amazon users aren't Amazon employees—they didn't sign an NDA and are free to share what they know.

At an individual level that may be so, but if they're doing it en-masse then they essentially had secret/sensitive competitor information when they made an offer to buy WA, daily active users, usage-by-demographic etc.

On top of that they would have had access to message formats, headers, encryption protocols and so on, things that are potentially trade secrets. This isn't user-visible data and app owners shouldn't expect that competitors can access it directly from a user's phone.

By that logic, is the NPD collecting trade secrets by tracking game sales? We know a lot of publishers would prefer that information is unknown.

Combining legally-obtained data to come to a conclusion is not illegal or immoral.

You could say the same about predatory payday loans. Just because something is opt-in doesn't mean it can't be manipulative and harmful.

Links below might help put a finger on exactly what's giving you that icky feeling?

To me, the problem seems to be (a) lack of informed consent, compounded by (b) the targeting of a vulnerable population.

[1] https://en.wikipedia.org/wiki/Belmont_Report [2] https://en.wikipedia.org/wiki/Respect_for_persons [3] https://en.wikipedia.org/wiki/Informed_consent

Those two and the lack of a robust, documented framework for Facebook to delete the data it is not interested in analyzing and store what it is securely.

The biggest question I have is why they needed the root certificate to analyze the popularity of future competitors. Surely logging the domains requested (visible even with TLS) and the number/timing of requests would be sufficient, and that would have greatly reduced the amount of private data gathered.

$20.- is actually quite cheap for a total invasion of privacy. You can rent out just your Facebook account for $500.- per month[1].

[1]: http://fbdollars.com/

People are opting into data collection in a way that creates a severe security vulnerability. Not only have you given up confidentiality (of everything, including passwords), you've also sacrificed integrity and availability. You can no longer trust anything you do on your phone.

A hypothetical experiment Facebook might be interested in conducting: Do people use Facebook more if Twitter is slow and/or unreliable?

This gave me words for that uncomfortable feeling:


I recently came across a similar market-research effort in Switzerland [1] after I noticed the VPN symbol in the status bar of a relative's iPhone while showing them something. I asked why they (not very tech-savvy otherwise) were using a VPN and was told they were participating in a market-research project in exchange for some shopping gift cards. As is the case with FB, the research company installs a VPN and their own root certificate.

Of course the implications are outlined in the fine print / data protection agreement when signing up, but I doubt most of the participants are aware of just how far the data collection they enable with this goes...

[1] https://swissmediapanel.ch/ (Link in German)

Interesting - wasn't aware of Swiss Media Panel.

Reading their FAQ they nicely pack what's going on in flowery language e.g. "Is the Swiss Media Software a Virus or Spyware?"

The Swiss Media Software is not a virus and also not spyware; it is not malicious and does not harm your computer, phone or tablet. The Swiss Media Software only observes the behaviour of internet users who have approved it (this last sentence could be a bad translation by me).

That said, the companies behind it, Net-Metrix and Intervista, are basically harmless: they produce consumer studies and are something like the "Nielsen" of Switzerland. The bigger risk here, IMO, is that they themselves get hacked. Knowing a little about Net-Metrix, for example, I doubt they have the resources to properly protect their infrastructure.

I feel like the translation overstates the fine-grainedness of the consent a bit, it's more along the lines of "Swiss Media Panel is a consent-based [i.e. the user has consented to having the application running on their device] application that tracks the behavior of internet users"

Security and also how far they actually go in separating the tracked data from your demographic & potentially personally identifiable data is definitely a concern, next to the obvious issue of how informed one can consider the consent they get from their users...

Seems to me like the law needs to be clearer about how to inform users in cases like these. Surely it's deceptive to tell someone they're getting paid for installing a "market research" app which actually records all their online activity. Charging companies who knowingly deceive users like this with fraud sounds reasonable to me.

I agree that the law should probably be changed, but for slightly different reasons.

They are clearly informed that the app will track information regarding their online activities, device usage behavior and applications they use.

I think the main issue is that users without a tech background are just not aware of the full implications of allowing a third party to collect this kind of data, even decrypting their HTTPS traffic and tracking everything they do online.

The statement by Strafach in the original article sums it up quite nicely:

“The fairly technical sounding ‘install our Root Certificate’ step is appalling,” Strafach tells us. “This hands Facebook continuous access to the most sensitive data about you, and most users are going to be unable to reasonably consent to this regardless of any agreement they sign, because there is no good way to articulate just how much power is handed to Facebook when you do this.”

You can’t be clearly informed and not aware at the same time.

Which makes this fraud, right?

In the same way automotive manufacturers are held accountable even if there was no intention to cause harm, the software industry needs to be held accountable.

We need professional organisations and government regulators working to ensure some kind of general industry best practice, where software developers initially start getting tapped on the shoulder, then given a series of rapidly increasing penalties, until the industry gets the point that it can't keep making out it's the wild wild west.

And this is why I don’t believe software development is a proper serious profession. The proper professions, here in Australia at least, are granted the authority to witness statutory declarations. I can go to a qualified vet, doctor, engineer, chiropractor(!), police officer, school teacher, postal worker, the list goes on[1], because these professions have a chain of trust.

And yet we trust(?) software developers and their employees with our most sensitive data!

1. https://www.ag.gov.au/Publications/Statutory-declarations/Pa...

I guess I don't understand what is so deceptive about this? At least not any more than what Google does, for example?

Do we really think computer-illiterate people know that Google can infer a huge amount of sensitive information about their end users without them ever ticking "I accept" or signing up for an account?

At least with this they have to take explicit actions like accepting the terms and installing the tracker before they're tracked. They even get compensated for it.

In Google's case you don't get anything.

To put it in newspaper terms: in Google's case they might just get the headlines of what you're doing, while in this case the company/Facebook gets every single word, space, and punctuation mark.

In an odd twist, this is precisely what some tech critics have been pushing for—for consumers to be monetarily compensated for the data they're giving up to tech giants:


One issue with this is brought up in the article:

> “The fairly technical sounding ‘install our Root Certificate’ step is appalling,” Strafach tells us. “This hands Facebook continuous access to the most sensitive data about you, and most users are going to be unable to reasonably consent to this regardless of any agreement they sign, because there is no good way to articulate just how much power is handed to Facebook when you do this.”

I can easily imagine being poor enough to engage into this contract, despite knowing exactly what power this gives Facebook. Don't confuse desperation for ignorance.

How would someone have an iPhone, cell service, and a PayPal account if they don't have $20 a month?

Could they have achieved this another way - still getting all the metadata for market research without a root certificate? If not, the step wasn't that appalling.

Compensating people for information on their behaviour is nothing new. If you participate in a program to report daily purchases you probably give away just as much information, and yet it's not viewed as controversial. The fact that Facebook doesn't have a great track record is problematic, but generally I don't see a big issue.

I agree that it's ethically dubious, and wasn't trying to defend the move by any means.

However, I _would_ contest the assertion that "there is no good way to articulate just how much power is handed to Facebook when you do this." Sure there is - just not one that would look good for Facebook.

Why would people be unable to consent? "You give us full access to everything on your phone for money" is a way to articulate this to anybody.

Because it’s not transitive. If I send you a personal message, I didn’t consent to you giving that to Facebook who will use my phone number to add it to the shadow profile they keep on me. Facebook might as well pay in pieces of silver.

right, but considering most web traffic is encrypted nowadays, it's pretty much mandatory to do MITM to analyze traffic.

Perhaps that should be taken as a hint rather than a challenge.

But you can still get the domain/subdomain requested, which should be enough to find out the service used and the time/frequency of use. That will change with encrypted SNI (https://blog.cloudflare.com/encrypted-sni/), which is not yet mainstream - though the large services Facebook is tracking have dedicated IP pools, so even eSNI wouldn't be a barrier there.

So for the reported use case - "hey, TikTok looks good, let's find out how many people use it before we buy it out" - it would seem that non-MITM would be plenty (and technically easier/lower-resource to do: the VPN could be kept on device and the pre-anonymized data sent up to the cloud, saving them server costs and bandwidth).
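To make the non-MITM route concrete: with plain TLS (pre-eSNI), the hostname sits unencrypted in the ClientHello, so a VPN endpoint can log destinations without installing any root certificate. Here's a rough sketch of pulling it out - purely illustrative, not anything Facebook actually ships, and a real parser would need much more bounds checking:

```python
import struct

def parse_sni(record: bytes):
    """Pull the plaintext server_name out of a TLS ClientHello record.

    Returns the hostname string, or None if the record isn't a
    ClientHello or carries no SNI extension.
    """
    if len(record) < 5 or record[0] != 0x16:   # not a TLS handshake record
        return None
    pos = 5
    if record[pos] != 0x01:                    # not a ClientHello
        return None
    pos += 4                                   # handshake type + 3-byte length
    pos += 2 + 32                              # client_version + random
    pos += 1 + record[pos]                     # session_id
    (cs_len,) = struct.unpack_from(">H", record, pos)
    pos += 2 + cs_len                          # cipher_suites
    pos += 1 + record[pos]                     # compression_methods
    (ext_total,) = struct.unpack_from(">H", record, pos)
    pos += 2
    end = pos + ext_total
    while pos + 4 <= end:
        ext_type, ext_len = struct.unpack_from(">HH", record, pos)
        pos += 4
        if ext_type == 0x0000:                 # server_name extension
            # layout: list_len(2) name_type(1) name_len(2) name
            (name_len,) = struct.unpack_from(">H", record, pos + 3)
            return record[pos + 5 : pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None
```

Per-connection hostnames plus timestamps and byte counts are already enough to answer "how many people use app X, and how often" - the reported use case - without ever decrypting a payload.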

The ethical problem here isn't related to compensation. It's the lack of informed consent for human subjects belonging to a vulnerable population.[1]

This is bad juju.

1. https://en.wikipedia.org/wiki/Respect_for_persons

That may be true for children (who would need parental consent, which Facebook apparently asked for). But just because someone doesn't have a deep understanding of tech doesn't mean they can't consent. You can't exclude people from signing an agreement just because they don't have a CS degree.

But it's also paired with Facebook gaining unprecedented data access that they don't have on most users. Crazy.

I think the common position is "Instagram should pay you for your photos" and "google translate should pay people for exploiting their translations", not "big companies should track everything you do and pay you a small amount of compensation".

It seems unlikely that this pilot program would be the same as a production run, compensation-wise.

Being opt-in and getting compensated are the two things I've seen people want from usage of their data. No one should have an issue with this since it does both.

I harbor serious doubts that most of the 'volunteers' here know exactly what it is they're providing -- the sign-up sheet probably didn't say "we will know very specifically your porn-watching habits" e.g.

I think this is fairly common when it comes to technology. The terms and conditions seem reasonable ("we collect some data to provide more relevant ads"), but when you look a bit more closely, they build a personal file that contains who you communicate (email/text/call) with and how often, where you go, what you buy, which websites you visit, which videos you watch, etc., to the extent that they are able. My mother is very smart, well-educated (she has a PhD), and relatively tech savvy (she works in scientific computing), but she was still floored when I told her about some of the tracking Facebook and Google perform. Google recording her location (which she technically agreed to, but did not realize) was enough that she asked me to help her migrate away from Gmail. She probably would have managed without my assistance, but the barrier would have been much higher.

"We will hold logs of you saying awful things to your girlfriend as you're breaking up in a file on you for the next 50 years" is more accurate. Privacy nihilism comes either from a lack of imagination, or a lack of perceived power.

13 to 17 year olds aren't supposed to be able to access porn legally, so can Facebook plausibly deny that this is something they are monitoring?

FB is not supposed to make deals with minors without adult supervision…

It was open to adults as well, but yes. However, they had no duty to specifically enumerate that particular case anyway lol. And technically it is a crime under US law to distribute porn to minors, but it's not a crime for the users to view it, so if they connect to a website intended for and operating in another country without such laws, there is no legal issue.

While I'm generally all for opt-in and free decision making, I think some lines should only be crossed in special circumstances - similarly to medical procedures that are only legal if the patient is very clearly informed about all potential risks (including those that are really not that probable) by an actual human being, and not by just clicking on a button. In the case of Facebook, they would in my opinion need to state very clearly that there is an albeit small risk of a breach, and that all collected data could be made public ("for example, your employer might suddenly know which porn websites you are visiting or what people you have googled")

Big opt-ins require big explaining because people can only truly make free decisions if there is an actual effort to inform them about what is happening.

Edit: so maybe this is a bit extreme because I realize that this might similarly apply to (for example) phone manufacturers. I still think that actually analysing the traffic is a bigger risk than simply providing the phone/browser to generate the traffic because of the centralized target that is Facebook.

The fact that Facebook tried to hide their involvement by using intermediaries like “uTest” says something though, right?

Regardless of whether Facebook was also trying to deceive users specifically—which we'll never know—they likely wanted to deceive Apple. I'm not going to blame any developer for attempting to bypass Apple's stupid restrictions.

Using intermediaries also allowed Facebook to technically not violate Apple's enterprise certificate contract (because the intermediaries were in violation instead).

> Using intermediaries also allowed Facebook to technically not violate Apple's enterprise certificate contract (because the intermediaries were in violation instead).

I actually thought they would have done that, but it used the regular "iPhone Distribution: Facebook, Inc. (In-House)" cert; they didn't even create a shell entity and get a new one. Reports say Apple has revoked this cert, breaking all internal (legitimate) apps and possibly creating quite a bit of chaos for internal ops.{1} Their separate Apple Developer Program organization account - used to deploy TestFlight public and private betas and App Store apps, as well as for local deployment to a small number of devices without Apple involvement for development testing - is not affected.

The intermediaries may or may not face consequences if they have separate agreements with Apple, but they did not use any Apple products to do their part and have not violated anything with Apple.

{1} https://www.theverge.com/2019/1/30/18203551/apple-facebook-b...

Huh, I stand corrected. I'm pretty surprised Facebook used their own cert. The fact it has been revoked was 100% predictable.

When collecting data like this, it's best to leave it to the pros - whether they're internal or you have to contract out.

This is a massive overreach. I would be pretty shocked if the people involved in this "research program" truly understood just how much access to their private data they were granting Facebook.

Maybe there wouldn't be an issue if they were being 100% transparent and explicit about what information they are collecting and how it will be used. However, the article seems to paint a fairly compelling picture that FB is not acting in good faith.

The fact that they're targeting kids makes it that much more unethical.

It depends very much on what users are told they are signing up for. The ad in the article says a "paid social media research study", which couldn't be more vague compared to the level of access Facebook are granted through the root certificate.

Plus, the deliberate targeting of children that won't know better. And asking people to upload their Amazon order history! Pretty scummy.

I can name two things missing besides opt-in and compensation:

* selected user's age;

* proper disclosure.

This happened right after Onavo was blocked - it's opt-in and with compensation only thanks to that.

> No one should have an issue with this since it does both.

Surely there's something to be said about age. There's a reason 14-year-olds can't enter into a legally binding contract.

Besides this, there's also the issue of how clear it is that the app is collecting private data. The article says:

"Facebook first got into the data-sniffing business when it acquired Onavo for around $120 million in 2014. The VPN app helped users track and minimize their mobile data plan usage, but also gave Facebook deep analytics about what other apps they were using."

which seems a lot like Facebook luring users into giving them their data without the users' knowledge.

Yeah.. these are minors though. It may not be illegal explicitly but the fact that they are minors feels different than the perspective of an enlightened bargain.

The group of people using this "App" by this can be broken up into 2 main groups:

1. Those who understand what they are signing away and need $20/mo more than they need privacy

2. Those who don't understand or don't understand fully what they are signing away and see it as free money

Preying on either group is disgusting and wrong. I'm really interested to see what Apple does here; they have taken a hard line on privacy and I don't doubt they will kill this app, but if FB wants to play whack-a-mole they WILL win (see the iOS sideloading scene). For me the big question is: will Apple take down the FB apps?

We've seen Netflix, Uber, FB, Amazon, and more skirt the rules of the App Store in the past, and they've barely gotten a slap on the wrist (in public at least). At what point does Apple take a real stand and say no? Cause so far $$$$$ has ALWAYS stopped them. I really do believe they care about privacy; I don't know if the shareholders do.

Edit: Typo

Since they were also targeting children, 13 and up, they probably fit into both categories, which makes it extra unethical...

As far as I know, when Apple discovered Uber doing some shady (but way less messed up) things, Uber was flat-out threatened with being kicked out. Problem is, this isn't Facebook's first rodeo; their previous app that did this was kicked off.

Wait a sec, 2 is wrong but 1 sounds positive to me. For some people this money could be incredibly important.

> For some people this money could be incredibly important.

That's exactly the problem. In Human Subjects Research this might be considered a violation of Informed Consent in the form of undue influence. From the Belmont Report [1]:

An agreement to participate in research constitutes a valid consent only if voluntarily given. This element of informed consent requires conditions free of coercion and undue influence. Coercion occurs when an overt threat of harm is intentionally presented by one person to another in order to obtain compliance. Undue influence, by contrast, occurs through an offer of an excessive, unwarranted, inappropriate or improper reward or other overture in order to obtain compliance. Also, inducements that would ordinarily be acceptable may become undue influences if the subject is especially vulnerable.

Note the "especially vulnerable" part at the end there.

[1] https://www.hhs.gov/ohrp/regulations-and-policy/belmont-repo...

> > For some people this money could be incredibly important.

> That's exactly the problem.

I thoroughly disagree, and I feel like speaking up about this particular philosophy of consent.

If I buy a used iPhone for $100 from someone who would die if they didn't get the $100, have I acted unethically? Whereas if I bought it from someone who didn't really need the $100, I wouldn't be acting unethically?

This sounds not only wrong, but highly counter-productive to me, since the consequence of not entering into this trade, just because the seller really needs the money, is that the seller dies. How does that make any of us better off?

As a society, we should encourage trading with people who really need the money, not label it as unethical. Whether a trade is unethical or not can be determined solely from the trade itself, not how much either (or both of the parties) needs the proceeds from the trade.

Example illustrating the absurdity: imagine two people who both really need the proceeds trading with each other. Ouch! According to your philosophy, they are both acting unethically (when in fact they are doing the only reasonable thing).

> If I buy a used iPhone for $100 from someone who would die if they didn't get the $100, have I acted unethically?

In some cases, you have clearly acted unethically. For instance, if the iPhone is worth $800 and you have more money, but you're getting the $100 price because the man is dying now and there's nobody else around to offer him more than $100.

Doesn’t this argument give us child labour and sweat shops? Fortunately most places have laws that set a minimum standard to protect society against those who have lower ethical standards.

>If I buy a used iPhone for $100 from someone who would die if they didn't get the $100, have I acted unethically?

Yes, it would be unethical to both parties.

In the first case, it is unethical because you are taking advantage of someone's dire need to get a better price on an iPhone.

In the second case you are denying yourself a clear cut opportunity to really help someone in need.

To be in a position to help someone in such a state is a privilege that does not come around often.

So say I don't buy the iPhone because I consider it unethical to pay only $100, but since I don't actually need a new phone, I'm not going to pay $200+. The seller dies because they were $100 short of some essential medicine they needed, or whatever. Is this really the outcome you want to see?

In an ideal world, I would just pay the person $100 and not take their phone—but, c'mon, this isn't the world we're living in. People die every day in the US—never mind the rest of the world—because they couldn't afford medicine/shelter/food/etc

You're contriving a situation where the seller doesn't have any other options AND you also don't have any ability to buy it to later sell at a profit (which would give the seller the ability to negotiate a better price than $100 with you while still allowing you a reasonable profit when you sell it).

I agree you can contrive a situation where the best ethical option is to pay the seller $100 for the phone but you really have to work on it (and the situation is pretty contrived to begin with)

> In an ideal world, I would just pay the person $100 and not take their phone—but, c'mon, this isn't the world we're living in.

In this case you could make it that kind of world, for that person, just for $100.

To be placed in a position where it's so easy to help someone is a privilege.

And if I have enough money, work in tech and still value $20 more than the (additional) data I give up? Especially knowing what's already collected the difference isn't necessarily that big. Am I not allowed to give consent then?

Tough question: would targeting the offer away from poor people make it more ethical?

Are we, as a society, ok with people being desperate enough to need $20 more than privacy?

Or maybe privacy isn't something we should care about, or at least value as much as we do as a society. Maybe I'm wrong. I think I see the dangers down the road, but maybe it's just a mirage and privacy will die and it won't be used against us by people in power or with money.

> Are we, as a society, ok with people being desperate enough to need $20 more than privacy?

This is not the question that's being asked here. The fact is, there are people like that, and for them, these things are great.

They're not solutions, they're band-aids. But if you're not ok with the situation existing, removing band-aids isn't particularly productive.

That's kind of what I was trying to get at, probably unsuccessfully: that maybe the anger shouldn't lie with FB or the fact this program exists, but with the fact that we're in a place where people will accept that little in exchange for so much.

This is like saying child labor is a band-aid, not a solution, but that we shouldn't ban it because it helps children. The whole point is that it's an inherently exploitative thing that is a net negative for society even with people who need money getting paid.

Right, because installing spyware is the same thing as forcing young kids to work. What?

No question, this is creepy. But nobody was talking about a ban. The post I replied to asked whether we should be okay with people who need $20 that badly.

The answer is no, we shouldn't be ok with it. But you're not solving the situation by banning this, you're making it worse if anything.

"We shouldn't be ok with people being homeless." "Okay, let's make 'being homeless' illegal. Problem solved!" "???"

And yes, this logic has been used before. It hasn't solved homelessness, btw.

We shouldn't be ok with people needing money that badly, and we also shouldn't be ok with creating an economic dependency on those people selling their privacy, the same way we've banned economic dependency on child labor. That is not at all equivalent to banning homelessness. It's more like banning hiring homeless people to fight lions with their bare hands for entertainment.

This is probably something you should ask the person who's desperate enough to need that $20, rather than decide for them from your point of reference.

I'm not attacking them for taking it, I harbor no ill will toward either group #1 or #2 of my original comment. I'm asking if we are ok with this being necessary in the first place. I'm not ok with it.

I have the app installed on my phone. I have it installed because I want the $20 Amazon. I don't know if I really "need" the $20, but it is 100% passive once it's installed. You literally need to do nothing. Every month they send you $20. I would not uninstall it even given the privacy concerns.

If you guys are so concerned about it then create something that puts cash in my pocket. I'll gladly run whatever app you want on my phone if you pay me.

Now you actually do make me have ill will toward #2. Enjoy your $20. A free lunch/month, right?

Edit: and now they shut it down. You can thank us privacy advocates later.

Nobody is judging the desperate person; we are judging the net effect of the rest of society on them.

It's provably affordable to give everyone the average rent of the world, which covers housing (rent of buildings), food (rent of farmland), energy (rent of space used for solar panels, windmills, ...), and natural resources (rent of mines).

Perhaps. I notice in most societies we say there are all sorts of things that you can’t do for money with your own time and body, however desperate you may be. Selling your organs and selling sex being examples.

> Selling your organs and selling sex being examples.

Resulting in thousands of deaths due to organ shortages, and sex workers being abused by pimps and corrupt cops.

Read "Never Let Me Go" by Kazuo Ishiguro, or even just watch the movie that was made of it (same title).

It illustrates fairly well why having an underclass that provides healthy organs to the rich is an utterly barbaric idea.

> and sex workers being abused by pimps and corrupt cops

I don't have a strong opinion either way, but my understanding is that it's very very far from proven that legalizing prostitution improves the lot of sex workers -- I am led to believe that trafficking becomes _more_ of a problem in localities where sex work is legal.

Further, and again no strong personal opinion on the matter, but I suspect you'd see a huge rise in coerced organ selling if it became legalized.

These are questions societies need to answer for themselves, and my central point was that there's already precedent for societies deciding that they don't benefit when some things are available for sale, even if an individual in the moment says they want to sell it.

How is it any different than not being willing to pay $3/month for private email, and instead getting it free from Google/Microsoft?

I don't have a good answer for you. My bad answer is I trust Google/Microsoft a little more when they say they aren't looking at your emails than I trust FB with Everything you do on your phone and no guarantee of what they are doing with it.

I probably misunderstood you. From the very beginning of Gmail, Google has been very upfront that they do look at your emails, each and every one of them. They use automation for that, is that what you meant? No fleshy humans physically reading with their own eyeballs?

I mean, do you think we should outlaw strippers? They sure give up more privacy than this for money. What about other things people sacrifice for money? Coal miners? Crab fisherman?

Please demonstrate how a naked person is giving up more privacy than what is being discussed in the linked article.

When a person is paid to strip, neither you nor the house gets to read everything they do on their smartphone.

I agree with TheSpiceIsLife; I don't see how those compare. Seeing someone naked is completely different from having full access to what they do on their smartphone. In fact, on their smartphone there are probably naked pictures.

Teenage strippers are frowned upon in most jurisdictions.

You have made the assumption that the sellers are desperate. Do you have evidence for that?

I was replying to the parent of my comment

> For some people this money could be incredibly important.

So I took this to mean "desperate enough" as in there are scales/levels of "desperate"-ness. Maybe desperate is the wrong word and the "enough" modifier wasn't obvious in my meaning.

Maybe a better way to put it:

Are we, as a society, ok with people needing $20 more than their privacy?

I was trying to convey that I imagine I would have to be pretty desperate to give up my privacy for $20/mo.

First of all I doubt that most people who go into this have a feeling of desperation, especially not the teenagers that are targeted. (I do sort of have an issue with the targeting, though when I think about it, I bet that teenagers actually understand what they are giving up better than 50 year olds). So I would rephrase that as "are we, as a society, ok with people _wanting_ $20 more than privacy", to which my answer is yes. I guess I would have a problem if it were desperation, but then I don't think that in this sense being "desperate" for $20 off a smart phone plan is correct usage of the term.

The app in question monitors all private communication that you have with others, who most certainly did not consent to have their own privacy taken away. So no, it's not okay to steal someone else's private information and put it on sale, however desperate you may be.

It's even more unethical to encourage people to do so, like FB did.

what if parents force their children to participate?

Wowsers! That hadn’t crossed my mind.

What if you have five kids. That’s $100 a month.

That... is... disgusting and will be very hard to pass up. This whole thing just makes me sad.

That's like the dictionary definition of "exploitation".

> need $20/mo more than they need privacy

What if you simply don't care if some researchers have access to your data?

I'd honestly consider doing this myself, even though I am a highly paid software engineer, because it really does sound like "free money".

Although I probably won't, because I don't want to go through the hassle of sideloading an app on my phone (but if it was a 1 click thing, I'd seriously consider it).

You are leaving out group three. Those who want the money and will put it on a spare phone and game Facebook by using the phone for nonsense that has little to do with their personal life. As a similar example, just because I use Facebook doesn't mean I click "like" on things I actually believe in. Or click on ads of products I would be interested in. Quite the opposite.

This is fair. I know when I was younger we loved finding ways to game systems like this, but we never were playing with the fire that is a monitored VPN. Yes, a separate phone solves most of the issues, but shit like clicking on ads for pennies in high school to get paid pales in comparison to this.

I didn't mean to imply that group 3 includes kids. The targeting to kids I consider unethical. I agree with you that a monitored VPN is not something someone under legal adult age should be considered capable of consenting to. In fact I would say it's one of those cases that even the parents can't consent to. It might be comparable to a parent consenting to their child's phone line being tapped by a third party for a monthly payment. Clearly unethical, and probably illegal in a lot of places.

>Preying on either group is disgusting and wrong.

I think it's fine for group #1. If the $20 is that important to them then I'd rather not deny them the opportunity.

I would just sign up for it and use it on a seedbox.

> Facebook first got into the data-sniffing business when it acquired Onavo [..] to learn that WhatsApp was sending more than twice as many messages per day as Facebook Messenger [...] and to spot WhatsApp’s meteoric rise and justify paying $19 billion to buy the chat startup

This makes a lot more sense now. At the time, the tech sphere was surprised at the price tag, which is expected, as people outside FB perhaps didn't have these metrics.

I know that it's not really insider trading, but the concept really does sound similar..

It is actually crazy that this is not discussed more often.

How the heck is this fair?

I know, right? Spy on your competition without anyone's consent and then simply make them an offer they can't refuse...

I don't see why Apple couldn't take down _all_ Facebook apps until they comply. It seems like Apple has the real power here.

Oh, they could. I mean, Apple could just keep killing the accounts they create, but is FB ballsy enough to keep opening them? It would be FB calling Apple's bluff. Users would riot, though. Apple has to decide between supporting privacy and supporting their users' choice. Apple has made the decision to take away users' choice in the past in the name of safety/protection; I could see them doing it again.

I can bet this is the LAST thing they want to see in the headlines. It forces them to address it; maybe they have a plan ready to go for this eventuality, a whole PR push, and I kind of hope they do. If they don't, they either look weak on privacy or have to roll out some half-baked plan/proposal/nebulous idea on how to protect users' privacy better in iOS 13 or something like that.

Right now Apple is doing a whole hell of a lot of talking out of both sides of its mouth, and I understand it's a hard line to walk; I'm not saying I could do it better. FB's practices are probably an affront to Apple in general, but skirting Apple's limitations to piss all over privacy and essentially turn an iPhone into an Android level of data collection - I can imagine Apple is PISSED. I just really hope they had something planned for this day.

> Oh, they could. I mean, Apple could just keep killing the accounts they create, but is FB ballsy enough to keep opening them?

Enterprise developer accounts (the ones that can issue apps signed such that they can be sideloaded on any device) aren't something just anyone can go online and sign up for-- they require manual approval with proof of a business's identity before they're created.

So, unless Facebook starts opening well-disguised shell companies or something along those lines to circumvent any restrictions Apple might put on them, this will be over as soon as Apple revokes Facebook's enterprise distribution account. (Or, more likely, threatens Facebook into dropping the VPN app, because FB probably doesn't want to lose the ability to distribute legitimate internal-use apps to their employees.)

> Enterprise developer accounts (the ones that can issue apps signed such that they can be sideloaded on any device) aren't something just anyone can go online and sign up for-- they require manual approval with proof of a business's identity before they're created.

It's my understanding that faking these business identities is the entire business model of iOS sideloading services (see the subreddit for examples [0]), so I don't think it's that difficult to do. That said, I'd be shocked if Apple let them go so far as to keep spinning up fake businesses, but then again, if FB thinks it can get away with it, what's stopping them?

[0] https://www.reddit.com/r/sideloaded/

That would be getting into serious fraud, arguably criminal under the CFAA. Normally Apple isn't interested in prosecuting these people, as it's just sideloading, which is some minor copyright violation and a security risk in their view. But if Facebook did it after having been banned themselves, after having a written statement from Apple that these apps were in violation, and then went and paid a third party or deliberately set up a shell company to defraud Apple? That could provoke a total business embargo between the companies, which would suffocate FB.

> as soon as Apple revokes Facebook's enterprise distribution account

Good call.


> Apple says: "Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”

> Oh they could, I mean Apple could just keep killing the accounts they create, but is FB ballsy enough to keep opening them? It would be FB calling Apple's bluff. Users would riot, though. Apple has to decide between supporting privacy and supporting their users' choice. Apple has made the decision to take away users' choice in the past in the name of safety/protection; I could see them doing it again.

Apple could block all updates to Facebook's apps until Facebook complies with their policies. That would get Facebook's attention in a way that wouldn't alienate Apple's users.

Facebook needs mobile, which means they need Apple more than Apple needs them.

Now that's a really good point. Also, I know Apple has straight up pulled apps (Tumblr) in the past. They don't uninstall them from users' devices (though they could), but Apple does have a number of tools in its toolbox.

1. Ban all accounts that were publishing this "VPN" (I assume FB didn't use its main account for any of this; if they did, leave that account alone and ban the others)

2. Block updates to FB for some period of time if they try to open new accounts and get caught

3. Delete FB Apps from App Store

4. Delete FB Apps from iOS devices

> Facebook needs mobile, which means they need Apple more than Apple needs them.

Is that actually true? Last time I checked, the iPhone only had a 20% market share. People buy phones, including iPhones, to do stuff with them. What Facebook provides is the stuff a huge part of users want to do with their phones.

Imagine iPhone users can no longer WhatsApp/FB Messenger with their Android-using friends. How many people will think twice before buying an iPhone again? Facebook screws with privacy, which users don't care about (yes, the average user doesn't give a shit, especially if they get paid), while Apple would be screwing with the users' apps, which they care about a lot! Apple is at a disadvantage here. Especially since their whole business model is a better user experience on overpriced hardware.

>> Facebook needs mobile, which means they need Apple more than Apple needs them.

> Is that actually true? Last time I checked iPhone only had a 20% market share.

But it's a relatively premium market segment that Facebook can't afford to lose. If they cede it, they're taking a serious risk that a competitor could emerge on the platform and turn them into the next MySpace. That 20% could pull the rest of the market its way, since whatever they migrate to would likely be available on all platforms.

This isn't a far-fetched idea. It's basically what Facebook did with its initial rollout exclusively to the Ivy League schools.

> Imagine iPhone users can no longer WhatsApp/FB Messenger with their Android-using friends. How many people will think twice before buying an iPhone again?

That might have been true five years ago, but Facebook's products are much less compelling now, for a whole host of reasons. Cross-platform replacements would quickly emerge to fill the niches Facebook was driven out of. Many people would get mad about not having Facebook on their phone, but most of them would get over it. But others are already primed to abandon Facebook, they're just waiting for a push.

> Apple is in the disadvantage here.

No, Facebook is, since their dominance of social networking is so tenuous that they need to convince people to use spy VPNs to stay on top of emerging competitors.

> Cross-platform replacements would quickly emerge

For messaging, one could argue that it already exists: Signal (which has already benefited from FB's announcement that they collect WhatsApp data)

To the average user, this would look like Apple exerting control over devices that are rightfully owned by the user, the same way it looked to people in the free software community during the advent of the App Store and the locked-down iPhone that needed to be jailbroken in order to do what you want with it. They would still want Facebook, an app they'd been using for lots of minutes a day, and they'd be denied it.

It might actually get ordinary users interested in making it so Apple doesn't have total control of what can be installed on the phones they purchased. They won't see all the philosophy behind it that people in the free software community do, but it would point them in the right direction.

Or I guess Apple can send them a cease and desist letter, since they are obviously violating the enterprise account license agreements.

I mean it depends on what you mean by "possible". I'm pretty sure Apple has the technical capability to delete apps over the air. The more interesting question is what they're willing to do: are they willing to pull FB from every iPhone? I don't think so.

If nothing else, they could issue an iOS update that permanently bans the trusted root certificate, erases all the apps, and directly blacklists the Facebook certs. Heck, they could even block connections to Facebook servers at the MAC layer if they wanted to. And they could require users to install it to get future updates. Of course, that would never happen.
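Cert blacklisting of this sort typically works by fingerprint: the OS ships a set of hashes of revoked certificates and rejects any cert that matches, regardless of how its chain validates. A toy sketch of the idea in Python (the cert bytes and blocklist here are entirely made up, not real Facebook or Apple certificates):

```python
import hashlib

# Hypothetical blocklist: SHA-256 fingerprints of revoked enterprise certs.
BLOCKED_FINGERPRINTS = set()

def fingerprint(cert_der: bytes) -> str:
    """Fingerprint a certificate as the SHA-256 hash of its DER bytes."""
    return hashlib.sha256(cert_der).hexdigest()

def revoke(cert_der: bytes) -> None:
    """Add a certificate's fingerprint to the blocklist."""
    BLOCKED_FINGERPRINTS.add(fingerprint(cert_der))

def is_trusted(cert_der: bytes) -> bool:
    """A blocklisted cert is rejected no matter what chain it presents."""
    return fingerprint(cert_der) not in BLOCKED_FINGERPRINTS

# usage: revoke a (fake) cert, and it is rejected from then on
fake_cert = b"-----FAKE DER BYTES-----"
assert is_trusted(fake_cert)
revoke(fake_cert)
assert not is_trusted(fake_cert)
```

This is roughly how real revocation lists behave: the check is on the certificate's identity (its hash), so re-signing with the same cert can't evade it; only obtaining a brand-new cert would.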

Apple can remove the app from the App Store but I'm pretty sure they can't delete apps over the air from a given phone.

If they can OTA update the OS they can delete whatever they want, all they need to do is call the internal API that deletes apps.

Could Apple, in a software update, put Facebook's apps in a sandbox, or wrapper, or VM-type thing (not sure of the technical term), whereby the device would only return encrypted or dummy device data? So that regardless of what permissions the FB app has been given, each time it goes to get that data, the VM interrupts and asks the user.

Say FB wants to get your location: even though FB has location permissions, a pop-up says "Facebook is attempting to find your location. Do you consent to sending your location to Facebook?" Or "Facebook is attempting to read the Names, Telephone Numbers and Addresses of everybody in your contact list. Do you consent to this?" every time the Facebook app makes the request.

It'd be a bad user experience, but Apple could say it cares about privacy and blame Facebook.
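The "wrapper" being described is essentially a proxy that gates every sensitive read behind a fresh consent prompt, ignoring any previously granted permission. A toy sketch of the pattern (all names hypothetical; iOS exposes no such hook to third parties):

```python
class ConsentGate:
    """Proxy that re-asks the user before every sensitive data access."""

    def __init__(self, prompt):
        # prompt: callable taking a message, returning True (allow) or False (deny)
        self.prompt = prompt

    def guarded_read(self, app_name, resource, fetch, dummy=None):
        """Return real data only if the user consents; otherwise dummy data."""
        if self.prompt(f"{app_name} is attempting to read {resource}. Allow?"):
            return fetch()
        return dummy

# usage: a user who always denies gets only the dummy value back
gate = ConsentGate(prompt=lambda msg: False)
loc = gate.guarded_read("Facebook", "your location",
                        fetch=lambda: (37.77, -122.42),
                        dummy=(0.0, 0.0))
assert loc == (0.0, 0.0)
```

Returning dummy data on denial (rather than an error) matches the comment's idea: the app keeps working, it just never sees the real values without per-request consent.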

Not really. It’s a hard pill to swallow for Apple that most iPhone users are active Instagram/Facebook/WhatsApp/Messenger users. If they took those apps down people would be super pissed. I know I’d be extremely annoyed.

It would be an interesting twist of irony if they did take them down and there was a massive backlash against Apple. I have a sneaking suspicion that the media and Twitterati are more up in arms about all this than the users themselves.

Reasonable action here would be to threaten to pull the affected apps off the store if Facebook doesn't react within a few weeks. I am very confident that Facebook would not take the amount of bad press this would create. Apple has a lot of leverage here, they don't need to just ban facebook apps outright.

I disagree. It’s a pretty equal symbiotic relationship IMO. I keep reading people saying “NO other developer would ever get away with this!” Yes they would, if they had 2.2 billion active users on their platform. This may shock a lot of people in the tech community, but for some companies the rules don’t apply. Ask Procter & Gamble why Wal-Mart gets better prices than a local grocery. I truly believe Apple needs Facebook at least as much as Facebook needs Apple.

I think you might be underestimating to what degree this behaviour actually violates the standards Apple sets out for its products. Gaining control over all information on your device, including the content of teenage users' private messages, shouldn't fly. Preventing this sort of stuff is one of the reasons people pay a premium for Apple products, and Cook has stressed this over the last few years.

On top of this you can add the fact that they basically shipped renamed Onavo code, which was already banned from the App Store, so this is a de facto violation of Apple's rules.

It's in the long term interest of Apple to not be soft on this stuff, it's not symbiotic.

> Preventing this sort of stuff is one of the reasons people pay a premium for Apple products, and Cook has stressed this over the last few years.

The device belongs to the user. It is fully within the user's legal right to install apps on their phone, even if Apple disagrees with those apps.

If you pay Apple $99 a year, you can install whatever you want on your own phone only. There are no restrictions on directly installing IPAs with Cydia Impactor or Xcode. You can actually do it for free, but only a few apps at a time, and you must renew every 7 days.

If you buy an iPhone without paying Apple $99 a year, you are also legally allowed to install whatever you want.

If someone owns the phone, it is within their full legal right to do whatever they want with it. No extra fee necessary.

That is your opinion (and I would also appreciate being able to do anything I want with my iPhone), but it's clearly not how Apple sees it.

No, it is not "my opinion".

It is instead how the law works.

Apple tried, and failed, to sue people for doing things with phones they had legally purchased.

If you install something Apple doesn't like, it is your full legal right to do so. The courts proved this.

I could argue that Apple is just as responsible for this as Facebook. How can they claim to take privacy seriously if it’s clearly possible for bad actors to get around the rules multiple times! How is Apple any less implicated in this than Facebook was in the CA scandal, when a bad actor violated its policies and posed as an academic research project to gain access to user data it then sold to third parties? If you are going to hold Facebook accountable for CA, why does Apple get a pass when it enables third-party access to my data?

> I could argue that Apple is just as responsible for this as Facebook.

And you'd be wrong.

> How can they claim to take privacy seriously if it’s clearly possible for bad actors to get around the rules multiple times!

That's BS. You might as well say: "how can they claim to take security seriously, if it's clearly possible for bad actors to find exploitable bugs in their products multiple times!"

Apple has a tough job, and it won't do it perfectly, because no one can. It's bizarre to claim that its excellent-but-not-perfect performance somehow makes it guilty of the things it's trying to stop.

You can use Instagram and Facebook without the app. It's a lot easier to limit what data Facebook has access to that way.

Block third-party cookies, install an ad blocker, and delete the cookies when you are done with FB/Instagram.

What do Apple users want? Probably not to have Facebook, Messenger, and Instagram taken away from them on their devices...

Probably not to have Facebook piss all over their privacy.

I suspect there have been enough revelations about FB practices that many users would support Apple if they blocked the main apps. For a temporary block, anyway.

Much of the audience here is probably much more understanding of the situation and aligned with your (presumed) position, myself included.

Most of the general population probably neither understands nor cares that much if someone is watching what sites they visit, or about other basic privacy items. If you make them choose between privacy (especially the privacy of others) and being able to post a picture of their lunch, many will choose the latter.

I have had a LOT of conversations with non technical people about this recently.

“My phone is listening to my conversations” is how it goes - people know this tracking is happening, they hate it and find it intensely creepy, they just don’t know the mechanism being used.

Unfortunately, FB pissing all over everyone’s privacy is invisible and intangible. So for most it doesn’t exist.

Disabling an app would be very noticeable and would anger many people.

> Probably not to have Facebook piss all over their privacy.

Ummm, then those people don't have to sideload an app that sells their data for money.

They might get into antitrust litigation if they revoked the Developer certificate (public apps and public betas), which was not breached, rather than just the Enterprise one (used here, and for legitimate employee apps). Apparently they have done the latter, causing massive chaos; the former would be an absolute nuclear option.

No iPhone user could use any Facebook app, anywhere in the world, which would make this story front page on every newspaper. Businesses could no longer manage their ad spots or use iOS devices for social media. They would likely be shocked at the unwarranted disruption, rightly blame Facebook for it, and cut their ad spend. Both PR departments would be working full steam on a war of words, disrupting all other work. Numerous suits would be filed. Meanwhile, Facebook stock would crash, leading to numerous investor lawsuits, especially since Facebook clearly risked this by blatantly violating contracts. Institutional investors would cut losses and pull out, further driving the price down.

I'd love to see it happen. But Apple doesn't want to, and honestly can't be expected to, pull the nuclear option just as a punishment for this. They would incur massive PR and legal expenses in response.

Smarter, less disruptive move: block Research and put all Facebook apps in some kind of security sandbox with the sole intent of slowing down the experience.

This might go against antitrust regulation.

At least at this point, antitrust regulation hasn't prevented Apple from removing apps from its store that violate its policies. I don't see how it would be any different if they created a penalty short of removal that's only applied to bad actors with a history of needing it. This is needed for more than just Facebook's stuff; it could also apply to trojan privacy invaders like the Weather Channel app.

They could slow walk updates as they do unusually thorough privacy audits, and perhaps even apply extra access restrictions (e.g. skewing location, forbid use of certain permissions, etc).

Antitrust? Google play still exists.

Bit of a chicken-or-egg problem: if Facebook/Instagram weren't on the App Store, how many people would switch phones __immediately__?

Do they? Competition regulators could see this as abuse of power and use it to destroy Apple's App Store model. Quite risky...

No, they can't, because there's still competition in the marketplace in the form of Android. Apple isn't a monopoly; if you don't like the App Store, there are 20 other phones at your carrier you can buy instead.

In the press, Apple might wind up taking the blame if Instagram were to become unavailable.
