Some of the highlights from Will Strafach, who did the actual app research for TechCrunch:
#1 "they didn't even bother to change the function names, the selector names, or even the "ONV" class prefix. it's literally all just Onavo code with a different UI."
#2 "the Root Certificate they have users install so that they can access any TLS-encrypted traffic they'd like."
My editorializing - I have been suspicious of Facebook getting the "submarine" treatment (1), but the sheer scumminess of #1 above, which is essentially a big fuck you to Apple, pretty well supports the recent view that FB will break any rule that serves its own ends.
Side note: wouldn't this fail against cert pinning? Even if a new trusted CA is installed on the system, an app implementing cert pinning still wouldn't trust the new CA. Seems like that could be a wise move for Facebook's rivals who want to limit snooping.
Edit: on second thought, even if Facebook can't decrypt a particular app's traffic, just knowing how many requests it makes, how large they are, and how often could still provide some useful insights into an app's usage.
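To make that concrete, here's a minimal sketch (Python for illustration; the app labels and flow tuples are hypothetical inputs, since a real VPN endpoint would see destination addresses and have to map them to apps) of the kind of summary you can build from encrypted traffic without decrypting a single byte:

```python
from collections import defaultdict

def summarize_flows(flows):
    """flows: iterable of (app_label, timestamp_s, bytes_sent) tuples,
    i.e. pure metadata -- nothing here requires breaking TLS.
    Returns per-app request count, total bytes, and mean gap between requests."""
    by_app = defaultdict(list)
    for app, ts, size in flows:
        by_app[app].append((ts, size))

    summary = {}
    for app, events in by_app.items():
        events.sort()  # order by timestamp
        times = [t for t, _ in events]
        gaps = [b - a for a, b in zip(times, times[1:])]
        summary[app] = {
            "requests": len(events),
            "bytes": sum(size for _, size in events),
            "mean_gap_s": sum(gaps) / len(gaps) if gaps else None,
        }
    return summary
```

Request counts, payload sizes, and timing alone are enough to estimate how often an app is opened and roughly how heavily it's used, which is exactly the kind of signal a market-research VPN is after.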
If the app hard-codes its certificate, it shouldn't trust another one. Note, though, that browsers like Chrome ignore cert pinning if the cert chains up to a locally trusted CA. So the answer is more "it depends".
> Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. “Data loss prevention” appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning.
EDIT: Spaced on the fact that this is a phone app. While Chrome on Windows ignores certificate pins, I'm unsure whether the same applies to the Android / iOS root stores.
You can enforce certificate pinning in your own native app. You can even go as far as not trusting the (hookable, on a jailbroken device) system libraries and linking in your own OpenSSL or something similar.
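As a rough sketch of the idea (Python for illustration; the pin value is a placeholder, and a production app would more likely pin the SPKI hash inside its TLS stack rather than check at this level):

```python
import hashlib
import socket
import ssl

# Placeholder pin: the SHA-256 hex digest of the server's DER-encoded
# certificate, hard-coded into the app binary at build time.
PINNED_SHA256 = "replace-with-real-cert-hash"

def cert_matches_pin(der_cert, pinned_hex):
    # A cert forged under an injected root CA hashes to a different value,
    # so the MITM is caught regardless of what the OS trust store says.
    return hashlib.sha256(der_cert).hexdigest() == pinned_hex

def connect_pinned(host, pinned_hex, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER bytes
            if not cert_matches_pin(der, pinned_hex):
                raise ssl.SSLError("cert does not match pin: possible MITM")
            return tls.version()
```

Note this still rides on the system TLS library; the "link in your own OpenSSL" approach goes a step further so that a hooked system library can't lie about which certificate was presented.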
I was under the impression Chrome was dropping HPKP support (it was deprecated in 2017), so I imagine this would be across all products - desktop and mobile.
As I've mentioned elsewhere in other threads, this app doesn't actually obtain root access (that is, it's not a jailbreak); the article/title is confusing a "root certificate" with "root access".
Yeah, the idea is to remove pinning with frida and then your proxy will see all the traffic. There's some sample scripts here, and if you google around you'll find one for iOS: https://codeshare.frida.re/
Installing a root certificate is a major breach of privacy. This is what Fiddler does on Windows when you need development access to a machine and want to inspect the data being transferred to and from every website, including over HTTPS.
By using a VPN they force all traffic to go through their servers, and with the root certificate they are able to monitor and gather data from every single app and website users visit: medical apps, chat apps, maps/GPS apps, even core operating-system apps. So for users of Facebook's VPN, they are effectively able to mine data that actually belongs to other apps and websites.
Imagine the malicious things someone could do with this level of access. And none of the usual mechanisms to, say, detect widespread compromise of a Certificate Authority would apply here.
They could drain your bank account, hiding the transactions and adjusting the balance whenever you viewed your mobile banking website or app, and doctoring any emailed statements.
They could send messages to your friends and family from your account asking them to send money to you in a certain way or donate money to a "charity", hiding the entire conversation from your view.
They could make some services you use slower or less reliable in subtle ways, to steer you towards the ones they want you to use— the ones that are easier for them to manipulate the traffic to/from.
They could make you think you're going insane, in any one of a variety of ways.
They could gather all of your private information, and then lock you out of your entire digital life all at once. Two-factor authentication wouldn't protect you; they could present you with a fake "re-confirm your settings" process to collect the information necessary to disable or replace the settings. (If you pay your rent using your phone, they could lock you out of your physical life too; they could prevent the payment from going through, show you a confirmation, and suppress notifications of unpaid rent and e-mails from your landlord.)
They could control which news you see, slowly shifting your views on things like privacy and security.
If you get suspicious about any of this, they could plant false information in your search results.
No, that is absurd; this is standard practice for almost all enterprise setups, as a central component of the deep packet inspection solutions used in network security. So the chance of it being illegal when the user has agreed to participate is negligible. The only thing that would make it illegal would be lying about the functionality and intrusiveness.
Remember that in the case of the Facebook VPN, they were asking teens to install it -- which means the teens involved are (according to US law) a protected class: minors. Underage folks can't legally agree to participate in anything.
Keep in mind, just because everyone does it doesn't mean squat about whether it's legal. It means it hasn't been tested in the legal system, such that a judgment has been passed regarding its legality.
If somehow, the practice were found to be a key part of circumventing some large industry's means of controlling something, it could realistically end up becoming illegal once subject to legal scrutiny.
I think the parent is implying that HTTPS encryption is a form of DRM. It's a reasonable question because AFAIK, the DMCA doesn't specifically call out DRM but rather any security/protection measures, so I can see how HTTPS would fall into that.
(However, upthread comments indicate that using certificates to see HTTPS traffic is a fairly common practice in enterprise setups)
Duh ;). FB isn't the only one... So many do this, your head would spin twice and it wouldn't even hurt. People need to see the bigger picture. FB is just one example.
How is this not in violation of most wiretapping laws? Facebook is not the common carrier in these cases. Both parties of conversations with teens are not consenting to the wiretapping, which is not allowed in many US states. I’m not sure teenage consent is considered “consent” and the parents aren’t a party to the conversations Facebook is wiretapping. Facebook is both paying people and recording the electronic communications... So how the hell is this legal under current laws?
>How is this not in violation of most wiretapping laws?
This is, perhaps, the most apt question to take away from this. If an individual did this, even with an EULA, that would be a fast-tracked way for them to see the inside of a penitentiary in almost any country, yeah?
I'm sure a disclaimer is in the EULA (boo)... but if it were deemed illegal, I suspect FB would just pay the fines, or more likely be given time to come into compliance with the law.
FB isn’t the service provider, they’re intercepting and recording private conversations they aren’t a party to... where not all the parties are consenting, and sometimes the party is not of age to consent... If it’s criminal, why should they be given “time”?
I agree it's not justice, but the answer is: because they have power. You could download the VPN and then try to press charges. Hope you have lots of money.
A 19-year-old can. But if he accesses a private discussion with his 13-year-old brother, the study indirectly gathers data about the kid. GDPR doesn't care whether you gathered data directly or indirectly; you are liable.
It's not quite that cut and dry in the UK[0], and the article mentions that parental consent was required.
But if they were operating in the UK, I'm somewhat doubtful their disclosures as reported in the article could be classed as informed consent under GDPR given the "specific protection" children are provided [1].
Has this been used in Europe as well? If yes, can someone affected please exercise their GDPR rights and ask for information and access to their personal data?
The fact that this exists makes me uncomfortable, but I'm having trouble pinpointing a reason why it's bad. People are opting into the data collection. Perhaps they don't know the full extent of what Facebook is tracking, but sideloading apps on iOS is not a one-tap process—anyone who used this had a sense of what they were doing.
And, $20 per month is pretty substantial compensation.
The way that Facebook is bypassing Apple's rules feels shady, but I've always felt those rules were user-hostile to begin with. I firmly believe that users should have control over their own devices, and that means letting users give information to companies if they so choose—especially if they're being financially compensated.
Recently, there was an HN thread about a Chinese man who sold his kidney for an iPhone at 17 years of age and, 8 years later, was bedridden for life because his remaining kidney failed.[1]
You could say the people who bought his kidney for an iPhone did nothing wrong. The kid had control over his own body and they made a deal the kid thought was good.
I think, though, that he wasn't properly educated about the risks such a trade would leave him with, and that the people who offered him the deal knew those risks very well but targeted him for being a naive child who wouldn't take them seriously.
I think this is the same case. People just don't understand or don't take the risks of this seriously enough, and companies like Facebook take advantage of that.
This is actually a great comparison, because I think both boil down to (a lack of?) informed consent. I think there's nothing inherently wrong with selling all your data for $20 a month. If you know exactly what you're doing.
Very likely the kid didn't understand exactly all the risks and wasn't informed when selling his kidney. And I would guess it's the same thing for a lot of kids signing up for this facebook thingy thinking "free $20 a month? Let's fucking goooooooo!"
But I honestly feel just as creeped out by apple dictating what people can install on their phones and what they can't.
The danger is judging informed consent by what we would find acceptable. It's conceivable that one could understand the risks yet not appreciate the probabilities involved, or have a value system skewed towards present gratification. The point is that the concept of informed consent in cases like this doesn't really add much information.
You're allowed to set your value system however you want; that's not my business. My issue is just that there are no details on their server-side security, and that the terms used to describe the effect of the root certificate are not nearly strong enough. But if someone truly is informed about who is collecting this, then there's nothing wrong, other than FB violating its agreement with Apple, which I trust Apple is seriously chewing them out for.
> I think there's nothing inherently wrong with selling all your data for $20 a month. If you know exactly what you're doing.
Informed consent is _impossible_ here because there is no way a person can know what future use the data will be put to, and Facebook sure as shit ain't gonna tell, and they're even less likely to limit their future use of the data through an agreement made in the present.
See, I felt like it was not a great comparison because a body part is a very different thing from web traffic. Since Facebook does not stand to gain anything from inflicting material harm on you, the expected value of giving them your data is at worst very slightly negative. They’re likely not even recording most of it. (What would they do with it?) The same can emphatically not be said of a mission critical singly-redundant organ.
They can’t run this program on a societal level because it would cost too much (>100B a year in the US alone, which is surely more than the value they derive from it). So we really only need to be concerned about whether this hurts the individuals involved, and whether Facebook plans to do anything unethical with the data.
You're implying (perhaps unintentionally) that selling an organ is equivalent to selling information on what websites you visit. There's a reason selling organs is illegal, at least in the US.
I don't think personal information is so valuable that we need to outlaw its sale.
I’m led to believe a whole bunch of intellectual heavy weights agree that privacy is so important it needs to be heavily regulated fairly immediately.
I tend to agree with them.
Come to think of it, is there anyone making a strong case for weaker privacy protection? I’m prepared to put aside my existing assumptions long enough to read an article or two.
Impingement on the quality of life of someone who sells an organ can be, and is, translated into a value statement. But human beings rely largely on emotional responses, which leads to a paradox: we can have people starving in the streets, yet because it would make society squeamish, they can't be allowed to sell their organs -- for the sake of their own human dignity and quality of life, never mind that society stripped them of both anyway.
This isn't paradoxical at all, when both options are horrific there is little benefit to cover up one with the other and claim a solution.
Furthermore a short term windfall from selling organs will not provide the skills or assets required to prevent long term starvation and homelessness; so now the horror has been compounded: homeless, starving, and prone to debilitating illness.
The paradoxical part is that being homeless and starving is horrifying but something humans have become accustomed to, while the idea of selling organs is culturally embedded as shocking -- even though ending up starving and homeless is roughly equivalent to being desperate enough to sell organs -- so it's not allowed.
Also, you state as fact that money from selling organs won't prevent long-term starvation and homelessness, which is surprising; you don't know that one way or the other, but you're trying to pass an opinion off as fact. I can't argue that it would clearly help by any definite metric either, but the correlation between living conditions and money is clear. How real-life legalization of organ selling would handle quality of surgery and post-surgery care, predatory practices, and other factors is a potential problem, but those are issues that exist in all commercial domains.
I also never argued for organ selling as a solution to a problem; as others mentioned, it is at best a band-aid for a problem rooted in wealth inequality, but I find it interesting as a societal flashpoint that shows how knee-jerk emotional responses can cause logical paradoxes.
> Impingement on quality of life by the seller of an organ can be, and is translated to a value statement.
How much is a Jew's life worth vs a Christian's life vs a Muslim's life? Or would it make you squeamish to try and adjudicate that? Or are some things not worth putting a price on because of principles, because breaking those principles would have worse second and third order effects?
Well, the market as an aggregate dictates that. It's entirely reasonable to assume that from market to market, buyers have different value assessments for people of different religions, and place different values on each. On a personal level, I would pass judgment by objective quality of organ, but that's beside the point. The idea of employment already places value statements on people and mainstream moral perceptions mark discrete points that shift from moral to immoral, when in fact its all on a continuous scale and the point at which it shifts is arbitrary.
> equivalent to selling information on what websites you visit.
You make it sound like they're just obtaining a list of URLs, but that's not it. For $20, they get to impersonate you while you're in the VPN and after you leave (they have all your passwords and session cookies). They can also impersonate anyone you deal with. They have all decrypted information going between you and the rest of the internet. Not even your ISP gets that amount.
Even more problematic is the scale at which they can do this. This isn't just a concern for us as individuals but also as a group: they can control a large portion of the information flow and authentication on the whole web.
No, the point is that under GDPR you can't sell personal data because it remains the property of the person it was shared by. You can sell access to that data, but the purchaser inherits the terms under which that data was shared and can't use it for any purposes the owner did not already agree to -- plus that transfer itself has to be consensual.
Also under GDPR the consent can be revoked at any point and the data has to be deleted. Plus the owner has to be given exhaustive information about what data was gathered, what basis it was gathered on and who it was passed on to (recursively).
It comes down to informed consent. But that's a whole can of worms.
One which needs to be opened and we need to sift through to find an acceptable answer. It is difficult to find something that allows freedom of choice and doesn't require a huge amount of knowledge. Plus, who decides what the facts are? The required minimum knowledge? How do you measure understanding? Are we going to have a ministry of truth? If so, who watches the watchmen?
> This is not sideloading; this is enterprise app distribution. Users are not self-signing this app.
Just to be clear, that's the process I was referring to. I consider it a form of side loading, because the app is coming from an unofficial, non-Apple source.
Enterprise apps won't run until you manually go into settings and certify that you trust the developer. Far from the most onerous of tasks, to be sure, but significantly more involved than tapping a download button. I don't see how someone could be "duped" into running an enterprise app.
They weren't duped into running it, i.e. they knew they were downloading and installing an app from Facebook Research. They likely did not understand the effect of installing the VPN and trusted root provided by the app.
> They likely did not understand the effect of installing the VPN and trusted root provided by the app.
I'm just not convinced on this point. I think it's likely a lot of people did understand that Facebook could see all their internet traffic, and thought for $20 it was a fair trade. There's a HN user down thread (anonymous5133) who says he used the app and quite liked the exchange.
Now, it's possible these users did not think through all the consequences that sending this data to Facebook could have—but just how much responsibility does Facebook have here? Does Facebook need to say in big red type, "This data could be given to health insurance companies some day and used to deny coverage?" (I'm not even clear if that would be legal, but I bring it up as an oft-cited nightmare scenario.)
Really? I think I opened my first bank account when I was 10 or so. My parents had to co-sign but other than that were not involved in it. I don't think this was very unusual, the bank even promoted a special free account for young people. I had summer jobs from the age of 12, would you expect me to shove the money under my mattress?
Well, your parents would surely be pretty involved in that, if their 12-year-old needs a ride down to the bank. It's not something that's going to slip under their noses.
Which brings us back to you falsely claiming anyone but you uttered that phrase, and your implied assertion that there is something wrong or odd with looking out for others, including children.
Many of the things they are spying on aren't just the business of the person who's getting the $20 either. If you message someone, do you expect them to turn around and sell that to Facebook? If you are one of Facebook's competitors, don't you deserve to not have them use their monopolistic power to scrape your data?
If there had been a hack of the data collected by this program, people could very easily and rightly have been fired or disciplined for exposing corporate data through a VPN. Even if most things are run through a corporate VPN or intranet, one sign in to your work account on Office 365 is game over for the company.
Whether or not you agree with Apple's rules, the fact that Facebook is willing to violate their Enterprise Certificate agreement with Apple is a red flag.
If Facebook is willing to break an agreement with one of the largest corporations on the planet, what reason is there to think that they will keep any promises they make to individual users?
> I'm having trouble pinpointing a reason why it's bad … $20 per month is pretty substantial compensation
But Facebook's competitors did not consent to their traffic being spied on and had a reasonable expectation that their traffic would remain safe from this type of intervention on iOS devices. Ignoring the ethics of paying users for data and so on, this seems like a straight up case of industrial espionage. The article says that this is how FB spotted the rise of WhatsApp, and presumably informed the offer. They would have known exact usage information, this is espionage via surveillance.
Why do Facebook's competitors get a say in it? Facebook is only getting access to user-visible data; if the users decide to give that away (in exchange for compensation), it's their right. Amazon users aren't Amazon employees—they didn't sign an NDA and are free to share what they know.
At an individual level that may be so, but if they're doing it en masse then they essentially had secret/sensitive competitor information -- daily active users, usage by demographic, etc. -- when they made an offer to buy WA.
On top of that they would have had access to message formats, headers, encryption protocols and so on, things that are potentially trade secrets. This isn't user-visible data and app owners shouldn't expect that competitors can access it directly from a user's phone.
Those two, plus the lack of a robust, documented framework for Facebook to delete the data it isn't interested in analyzing and to securely store what it is.
The biggest question I have is why they needed the root certificate to analyze the popularity of future competitors. Surely logging the domain requested (visible even with TLS) and the number/timing of requests would be sufficient, and that would have greatly reduced the amount of private data gathered.
People are opting into data collection in a way that creates a severe security vulnerability. Not only have you given up confidentiality (of everything, including passwords), you've also sacrificed integrity and availability. You can no longer trust anything you do on your phone.
A hypothetical experiment Facebook might be interested in conducting: Do people use Facebook more if Twitter is slow and/or unreliable?
I recently came across a similar market research effort in Switzerland [1] after I noticed the VPN symbol in the status bar of a relative's iPhone while showing them something. I asked why they (not very tech-savvy otherwise) were using a VPN and was told they were participating in a market research project in exchange for some shopping gift cards. As in FB's case, the research company installs a VPN and its own root certificate.
Of course the implications are outlined in the fine print / data protection agreement when signing up, but I doubt most of the participants are aware of just how far the data collection they enable with this goes...
Reading their FAQ they nicely pack what's going on in flowery language e.g. "Is the Swiss Media Software a Virus or Spyware?"
"The Swiss Media Software is not a virus and also not spyware; it is not malicious and does not do harm to your computer, phone or tablet. The Swiss Media Software only observes the behaviour of internet users who have approved it." (That last sentence could be a bad translation on my part.)
That said, the companies behind it, Net-Metrix and Intervista, are basically harmless: they produce consumer studies and are something like the "Nielsen" of Switzerland. The bigger risk here, IMO, is that they themselves get hacked; knowing a little about Net-Metrix, for example, I doubt they have the resources to properly protect their infrastructure.
I feel like the translation overstates the fine-grainedness of the consent a bit, it's more along the lines of "Swiss Media Panel is a consent-based [i.e. the user has consented to having the application running on their device] application that tracks the behavior of internet users"
Security, and also how far they actually go in separating the tracked data from your demographic and potentially personally identifiable data, is definitely a concern, next to the obvious issue of how informed one can consider the consent they get from their users...
Seems to me the law needs to be clearer about how to inform users in cases like these. Surely it's deceptive to tell someone they're getting paid for installing a "market research" app that actually records all their online activity. Charging companies who knowingly deceive users like this with fraud sounds reasonable to me.
I agree that the law should probably be changed, but for slightly different reasons.
They are clearly informed that the app will track information regarding their online activities, device usage behavior and applications they use.
I think the main issue is that users without a tech background are just not aware of the full implications of allowing a third party to collect this kind of data, even decrypting their HTTPS traffic and tracking everything they do online.
The statement by Strafach in the original article sums it up quite nicely:
“The fairly technical sounding ‘install our Root Certificate’ step is appalling,” Strafach tells us. “This hands Facebook continuous access to the most sensitive data about you, and most users are going to be unable to reasonably consent to this regardless of any agreement they sign, because there is no good way to articulate just how much power is handed to Facebook when you do this.”
You can’t be clearly informed and not aware at the same time.
Which makes this fraud, right?
In the same way automotive manufacturers are held accountable even if there was no intention to cause harm, the software industry needs to be held accountable.
We need to have professional organisations and government regulators working to ensure some kind of general industry best practice, where software developers initially get tapped on the shoulder, then given a series of rapidly increasing penalties, until the industry gets the point that it can't keep acting like it's the wild wild west.
And this is why I don’t believe software development is a proper serious profession. The proper professions, here in Australia at least, are granted the authority to witness statutory declarations. I can go to a qualified vet, doctor, engineer, chiropractor(!), police officer, school teacher, postal worker, the list goes on[1], because these professions have a chain of trust.
And yet we trust(?) software developers and their employees with our most sensitive data!
I guess I don't understand what is so deceptive about this, at least any more than what Google does, for example.
Do we really think computer-illiterate people know that Google can infer a huge amount of sensitive information about its end users without them ever ticking "I accept" or signing up for an account?
At least with this they have to take explicit actions like accepting the terms and installing the tracker before they're tracked. They even get compensated for it.
In newspaper terms: in Google's case they might just get the headlines of what you're doing, whereas here the company/Facebook gets every single word, space, and punctuation mark.
In an odd twist, this is precisely what some tech critics have been pushing for—for consumers to be monetarily compensated for the data they're giving up to tech giants:
> “The fairly technical sounding ‘install our Root Certificate’ step is appalling,” Strafach tells us. “This hands Facebook continuous access to the most sensitive data about you, and most users are going to be unable to reasonably consent to this regardless of any agreement they sign, because there is no good way to articulate just how much power is handed to Facebook when you do this.”
I can easily imagine being poor enough to engage into this contract, despite knowing exactly what power this gives Facebook. Don't confuse desperation for ignorance.
Could they have achieved this another way -- still getting all the metadata for market research without a root certificate? If not, the step wasn't that appalling.
Compensating people for information on their behaviour is nothing new. If you participate in a program to report your daily purchases, you probably give away just as much information, and yet it's not viewed as controversial. The fact that Facebook doesn't have a great track record is problematic, but in general I don't see a big issue.
I agree that it's ethically dubious, and wasn't trying to defend the move by any means.
However, I _would_ contend with the assertion that "there is no good way to articulate just how much power is handed to Facebook when you do this." Sure there is—just not one that would look good for Facebook.
Because it’s not transitive. If I send you a personal message, I didn’t consent to you giving that to Facebook who will use my phone number to add it to the shadow profile they keep on me. Facebook might as well pay in pieces of silver.
But you can still get the domain/subdomain requested, which should be enough to find out the service used and the time/frequency of use. That will change with encrypted SNI (https://blog.cloudflare.com/encrypted-sni/), which is not yet mainstream; and anyway, the large services Facebook is tracking have dedicated IP pools, so even that's not a barrier.
So for the reported use case -- "hey, TikTok looks good, let's find out how many people use it before we buy it out" -- it would seem that non-MITM collection would be plenty (and technically easier/cheaper to do: the VPN analysis could be kept on-device and only pre-anonymized data sent up to the cloud, saving them server costs and bandwidth).
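For illustration, the reason domain-level logging works without MITM is that (pre-ESNI) the SNI hostname rides in cleartext in the TLS ClientHello. A toy Python sketch, building a minimal ClientHello and parsing the hostname back out (real ClientHellos carry far more extensions; the builder here is just enough to exercise the parser):

```python
import struct

def build_client_hello(hostname):
    """Build a minimal TLS 1.2 ClientHello record carrying an SNI extension."""
    name = hostname.encode()
    sni_entry = b"\x00" + struct.pack("!H", len(name)) + name   # host_name type + len + name
    sni_list = struct.pack("!H", len(sni_entry)) + sni_entry    # server_name_list
    sni_ext = struct.pack("!HH", 0x0000, len(sni_list)) + sni_list
    extensions = struct.pack("!H", len(sni_ext)) + sni_ext
    body = (
        b"\x03\x03"                              # client_version: TLS 1.2
        + b"\x00" * 32                           # random (zeroed for the sketch)
        + b"\x00"                                # session_id: empty
        + struct.pack("!H", 2) + b"\x13\x01"     # one cipher suite
        + b"\x01\x00"                            # compression: null only
        + extensions
    )
    handshake = b"\x01" + len(body).to_bytes(3, "big")           # handshake type 1 = ClientHello
    return b"\x16\x03\x01" + struct.pack("!H", len(handshake + body) - 2) + handshake + body

def extract_sni(record):
    """Pull the plaintext SNI hostname out of a ClientHello, or None."""
    if record[0] != 0x16 or record[5] != 0x01:   # not a handshake / not a ClientHello
        return None
    p = 9 + 2 + 32                               # skip record+handshake headers, version, random
    p += 1 + record[p]                           # session_id
    p += 2 + int.from_bytes(record[p:p + 2], "big")  # cipher_suites
    p += 1 + record[p]                           # compression_methods
    ext_end = p + 2 + int.from_bytes(record[p:p + 2], "big")
    p += 2
    while p + 4 <= ext_end:                      # walk extensions
        ext_type = int.from_bytes(record[p:p + 2], "big")
        ext_len = int.from_bytes(record[p + 2:p + 4], "big")
        if ext_type == 0x0000:                   # server_name extension
            name_len = int.from_bytes(record[p + 7:p + 9], "big")
            return record[p + 9:p + 9 + name_len].decode()
        p += 4 + ext_len
    return None
```

Anything sitting on the wire -- a VPN endpoint, your ISP, a coffee-shop router -- can run the equivalent of `extract_sni` on the first packet of every TLS connection, no root certificate required.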
That may be true for children (who should need parental consent which Facebook apparently asked for). But just because someone doesn't have a deep understanding of tech doesn't mean they can't consent. You can't exclude people from signing an agreement just because they don't have a CS degree.
I think the common position is "Instagram should pay you for your photos" and "google translate should pay people for exploiting their translations", not "big companies should track everything you do and pay you a small amount of compensation".
Being opt-in and getting compensated are the two things I've seen people want from usage of their data. No one should have an issue with this since it does both.
I harbor serious doubts that most of the 'volunteers' here knew exactly what they were providing -- the sign-up sheet probably didn't say, e.g., "we will know very specifically your porn-watching habits".
I think that this is a fairly common when it comes to technology. The terms and conditions seem reasonable ("we collect some data to provide more relevant ads"), but when you look a bit more closely they build a personal file that contains who you communicate (email/text/call) with and how often, where you go, what you buy, which websites you visit, which videos you watch, etc to the extent that they are able. My mother is very smart, well-educated (she has a PhD), and relatively tech savvy (she works in scientific computing), but she was still floored when I told her about some of the tracking Facebook and Google perform. Google recording her location (which she technically agreed to, but did not realize) was enough that she asked me to help her migrate away from Gmail. She probably would have managed without my assistance, but the barrier would have been much higher.
"We will hold logs of you saying awful things to your girlfriend as you're breaking up in a file on you for the next 50 years" is more accurate. Privacy nihilism comes either from a lack of imagination, or a lack of perceived power.
It was open to adults as well, but yes. However, they had no duty to specifically enumerate that particular case anyway lol. And while it is technically a crime under US law to distribute porn to minors, it's not a crime for the users to view it, so if they connect to a website intended for and operating in another country without such laws, there is no legal issue.
While I'm generally all for opt-in and free decision making, I think some lines should only be crossed in special circumstances. It's similar to medical procedures that are only legal if the patient is very clearly informed about all potential risks (including even those risks that are really not that probable) by an actual human being, not by just clicking a button. In Facebook's case they would, in my opinion, need to state very clearly that there is an albeit small risk of a breach, and that all collected data could be made public ("for example, your employer might suddenly know which porn websites you visit or what people you have googled").
Big opt-ins require big explaining because people can only truly make free decisions if there is an actual effort to inform them about what is happening.
Edit: so maybe this is a bit extreme because I realize that this might similarly apply to (for example) phone manufacturers. I still think that actually analysing the traffic is a bigger risk than simply providing the phone/browser to generate the traffic because of the centralized target that is Facebook.
Regardless of whether Facebook was also trying to deceive users specifically—which we'll never know—they likely wanted to deceive Apple. I'm not going to blame any developer for attempting to bypass Apple's stupid restrictions.
Using intermediaries also allowed Facebook to technically not violate Apple's enterprise certificate contract (because the intermediaries were in violation instead).
> Using intermediaries also allowed Facebook to technically not violate Apple's enterprise certificate contract (because the intermediaries were in violation instead).
I actually thought they would have done that, but it used the regular "iPhone Distribution: Facebook, Inc. (In-House)" cert; they didn't even create a shell entity and get a new one. Reports say Apple has revoked this cert, breaking all internal (legitimate) apps and possibly creating quite a bit of chaos for internal ops.{1} Their separate Apple Developer Program organization account, used to deploy TestFlight public and private betas and App Store apps, as well as local deployment to a small number of devices without Apple involvement for development testing, is not affected.
The intermediaries may or may not face consequences if they have separate agreements with Apple, but they did not use any Apple products to do their part and have not violated anything with Apple.
This is a massive overreach. I would be pretty shocked if the people involved in this "research program" truly understood just how much access to their private data they were granting Facebook.
Maybe there wouldn't be an issue if they were being 100% transparent and explicit about what information they are collecting and how it will be used. However, the article seems to paint a fairly compelling picture that FB is not acting in good faith.
The fact that they're targeting kids makes it that much more unethical.
It depends very much on what users are told they are signing up for. The ad in the article says a "paid social media research study", which couldn't be more vague compared to the level of access Facebook are granted through the root certificate.
Plus, the deliberate targeting of children that won't know better. And asking people to upload their Amazon order history! Pretty scummy.
> No one should have an issue with this since it does both.
Surely there's something to be said about age. There's a reason 14-year-olds can't enter into a legally binding contract.
Besides this, there's also the issue of how clear it is that the app is collecting private data. The article says:
"Facebook first got into the data-sniffing business when it acquired Onavo for around $120 million in 2014. The VPN app helped users track and minimize their mobile data plan usage, but also gave Facebook deep analytics about what other apps they were using."
which seems a lot like Facebook luring users into giving them their data without the users' knowledge.
Yeah... these are minors, though. It may not be explicitly illegal, but the fact that they are minors feels different from the perspective of an enlightened bargain.
The people using this "App" can be broken up into 2 main groups:
1. Those who understand what they are signing away and need $20/mo more than they need privacy
2. Those who don't understand or don't understand fully what they are signing away and see it as free money
Preying on either group is disgusting and wrong. I'm really interested to see what Apple does here. They have taken a hard line on privacy, and I don't doubt they will kill this app, but if FB wants to play whack-a-mole they WILL win (see the iOS sideloading scene). For me the big question is: will Apple take down the FB apps?
We've seen Netflix, Uber, FB, Amazon, and more skirt the rules of the App Store in the past, and they've barely gotten a slap on the wrist (in public at least). At what point does Apple take a real stand and say no? 'Cause so far $$$$$ has ALWAYS stopped them. I really do believe they care about privacy; I don't know if the shareholders do.
Since they were also targeting children, 13 and up, they probably fit into both categories, which makes this extra unethical...
As far as I know, when Apple discovered Uber doing some shady (but way less messed up) things, Apple flat out threatened to kick them out. Problem is, this isn't Facebook's first rodeo: their previous app that did this was kicked off.
> For some people this money could be incredibly important.
That's exactly the problem. In Human Subjects Research this might be considered a violation of Informed Consent in the form of undue influence. From the Belmont Report [1]:
An agreement to participate in research constitutes a valid consent only if voluntarily given. This element of informed consent requires conditions free of coercion and undue influence. Coercion occurs when an overt threat of harm is intentionally presented by one person to another in order to obtain compliance. Undue influence, by contrast, occurs through an offer of an excessive, unwarranted, inappropriate or improper reward or other overture in order to obtain compliance. Also, inducements that would ordinarily be acceptable may become undue influences if the subject is especially vulnerable.
Note the "especially vulnerable" part at the end there.
> > For some people this money could be incredibly important.
> That's exactly the problem.
I thoroughly disagree, and I feel like speaking up about this particular philosophy of consent.
If I buy a used iPhone for $100 from someone who would die if they didn't get the $100, have I acted unethically? Whereas if I bought it from someone who didn't really need the $100, I wouldn't be acting unethically?
This sounds not only wrong, but highly counter-productive to me, since the consequence of not entering into this trade, just because the seller really needs the money, is that the seller dies. How does that make any of us better off?
As a society, we should encourage trading with people who really need the money, not label it as unethical. Whether a trade is unethical or not can be determined solely from the trade itself, not how much either (or both of the parties) needs the proceeds from the trade.
Example illustrating the absurdity: imagine two people who both really need the proceeds trading with each other. Ouch! According to your philosophy, they are both acting unethically (when in fact they are doing the only reasonable thing).
> If I buy a used iPhone for $100 from someone who would die if they didn't get the $100, have I acted unethically?
In some cases, you have clearly acted unethically. For instance, if the iPhone is worth $800 and you have more money, but you're getting the $100 price because the man is dying now and there's nobody else around to offer him more than $100.
Doesn’t this argument give us child labour and sweat shops? Fortunately most places have laws that set a minimum standard to protect society against those who have lower ethical standards.
So say I don't buy the iPhone because I consider it unethical to pay only $100, but since I don't actually need a new phone, I'm not going to pay $200+. The seller dies because they were $100 short of some essential medicine they needed, or whatever. Is this really the outcome you want to see?
In an ideal world, I would just pay the person $100 and not take their phone—but, c'mon, this isn't the world we're living in. People die every day in the US—never mind the rest of the world—because they couldn't afford medicine/shelter/food/etc
You're contriving a situation where the seller doesn't have any other options AND you also don't have any ability to buy it to later sell at a profit (which would give the seller the ability to negotiate a better price than $100 with you while still allowing you a reasonable profit when you sell it).
I agree you can contrive a situation where the best ethical option is to pay the seller $100 for the phone but you really have to work on it (and the situation is pretty contrived to begin with)
And if I have enough money, work in tech and still value $20 more than the (additional) data I give up? Especially knowing what's already collected the difference isn't necessarily that big. Am I not allowed to give consent then?
Are we, as a society, ok with people being desperate enough to need $20 more than privacy?
Or maybe privacy isn't something we should care about, or at least value as much as we do as a society. Maybe I'm wrong. I think I see the dangers down the road, but maybe it's just a mirage, and privacy will die and it won't be used against us by people in power or with money.
That's kind of what I was trying to get at, probably unsuccessfully: maybe the anger shouldn't lie with FB or the fact this program exists, but with the fact that we are in a place where people will accept that little for so much.
This is like saying child labor is a band-aid, not a solution, but that we shouldn't ban it because it helps children. The whole point is that it's an inherently exploitative thing that is a net negative for society even with people who need money getting paid.
Right, because installing spyware is the same thing as forcing young kids to work. What?
No question, this is creepy. But nobody was talking about a ban. The post I replied to asked whether we should be okay with people who need $20 that badly.
The answer is no, we shouldn't be ok with it. But you're not solving the situation by banning this, you're making it worse if anything.
"We shouldn't be ok with people being homeless." "Okay, let's make 'being homeless' illegal. Problem solved!" "???"
And yes, this logic has been used before. It hasn't solved homelessness, btw.
We shouldn't be ok with people needing money that badly, and we also shouldn't be ok with creating an economic dependency on those people selling their privacy, the same way we've banned economic dependency on child labor. That is not at all equivalent to banning homelessness. It's more like banning hiring homeless people to fight lions with their bare hands for entertainment.
I'm not attacking them for taking it, I harbor no ill will toward either group #1 or #2 of my original comment. I'm asking if we are ok with this being necessary in the first place. I'm not ok with it.
I have the app installed on my phone. I have it installed because I want the $20 Amazon credit. I don't know if I really "need" the $20, but it is 100% passive once it is installed. You literally need to do nothing. Every month they send you $20. I would not uninstall it even given the privacy concerns.
If you guys are so concerned about it then create something that puts cash in my pocket. I'll gladly run whatever app you want on my phone if you pay me.
Nobody is judging the desperate person; we are judging the net effect of the rest of society on them.
It's provably affordable to give everyone the average rent of the world, which covers housing (rent of buildings), food (rent of farmland), energy (rent of space used for solar panels, windmills, ...), and natural resources (rent of mines).
Perhaps. I notice in most societies we say there are all sorts of things that you can’t do for money with your own time and body, however desperate you may be. Selling your organs and selling sex being examples.
> and sex workers being abused by pimps and corrupt cops
I don't have a strong opinion either way, but my understanding is that it's very very far from proven that legalizing prostitution improves the lot of sex workers -- I am led to believe that trafficking becomes _more_ of a problem in localities where sex work is legal.
Further, and again no strong personal opinion on the matter, but I suspect you'd see a huge rise in coerced organ selling if it became legalized.
These are questions societies need to answer for themselves, and my central point was that there's already precedent for societies deciding that they don't benefit when some things are available for sale, even if an individual in the moment says they want to sell it.
I don't have a good answer for you. My bad answer is I trust Google/Microsoft a little more when they say they aren't looking at your emails than I trust FB with Everything you do on your phone and no guarantee of what they are doing with it.
I probably misunderstood you. From the very beginning of Gmail, Google has been very upfront that they do look at your emails, each and every one of them. They use automation for that, is that what you meant? No fleshy humans physically reading with their own eyeballs?
I mean, do you think we should outlaw strippers? They sure give up more privacy than this for money. What about other things people sacrifice for money? Coal miners? Crab fisherman?
I agreed with TheSpiceIsLife, I don't see how those compare. Seeing someone naked is so completely different than having full access to what they do on their smart phone. In fact, on their smart phone there are probably naked pictures.
> For some people this money could be incredibly important.
So I took this to mean "desperate enough" as in there are scales/levels of "desperate"-ness. Maybe desperate is the wrong word and the "enough" modifier wasn't obvious in my meaning.
Maybe a better way to put it:
Are we, as a society, ok with people needing $20 more than their privacy?
I was trying to convey that I imagine I would have to be pretty desperate to give up my privacy for $20/mo.
First of all I doubt that most people who go into this have a feeling of desperation, especially not the teenagers that are targeted. (I do sort of have an issue with the targeting, though when I think about it, I bet that teenagers actually understand what they are giving up better than 50 year olds). So I would rephrase that as "are we, as a society, ok with people _wanting_ $20 more than privacy", to which my answer is yes. I guess I would have a problem if it were desperation, but then I don't think that in this sense being "desperate" for $20 off a smart phone plan is correct usage of the term.
The app in question monitors all private communication that you have with others, who most certainly did not consent to have their own privacy taken away. So no, it's not okay to steal someone else's private information and put it on sale, however desperate you may be.
It's even more unethical to encourage people to do so, like FB did.
What if you simply don't care if some researchers have access to your data?
I'd honestly consider doing this myself, even though I am a highly paid software engineer, because it really does sound like "free money".
Although I probably won't, because I don't want to go through the hassle of sideloading an app on my phone (but if it was a 1 click thing, I'd seriously consider it).
You are leaving out group three. Those who want the money and will put it on a spare phone and game Facebook by using the phone for nonsense that has little to do with their personal life. As a similar example, just because I use Facebook doesn't mean I click "like" on things I actually believe in. Or click on ads of products I would be interested in. Quite the opposite.
This is fair. I know when I was younger we loved finding ways to game systems like this, but we never were playing with the fire that is a monitored VPN. Yes, a separate phone solves most of the issues, but this pales in comparison to shit like clicking on ads for pennies in high school to get paid.
I didn't mean to imply that group 3 includes kids. The targeting to kids I consider unethical. I agree with you that a monitored VPN is not something someone under legal adult age should be considered capable of consenting to. In fact I would say it's one of those cases that even the parents can't consent to. It might be comparable to a parent consenting to their child's phone line being tapped by a third party for a monthly payment. Clearly unethical, and probably illegal in a lot of places.
> Facebook first got into the data-sniffing business when it acquired Onavo [..] to learn that WhatsApp was sending more than twice as many messages per day as Facebook Messenger [...] and to spot WhatsApp’s meteoric rise and justify paying $19 billion to buy the chat startup
This makes a lot more sense now. At the time, the tech sphere was surprised at the price tag, which is expected, as people outside FB perhaps didn't have these metrics.
Oh they could. I mean, Apple could just keep killing the accounts FB creates, but is FB ballsy enough to keep opening them? It would be FB calling Apple's bluff. Users would riot, though. Apple has to decide between supporting privacy and supporting their users' choice. Apple has made the decision to take away users' choice in the past in the name of safety/protection; I could see them doing it again.
I can bet this is the LAST thing they want to see in the headlines. It forces them to address it. Maybe they have a plan ready to go for this eventuality, a whole PR push, and I kind of hope they do. If they don't, they either look weak on privacy or have to roll out some half-baked plan/proposal/nebulous idea on how to protect user privacy better in iOS 13 or something like that.
Right now Apple is doing a whole hell of a lot of talking out of both sides of its mouth, and I understand it's a hard line to walk; I'm not saying I could do it better. FB's practices in general are probably an affront to Apple, but skirting Apple's limitations to piss all over privacy and essentially turn an iPhone into an Android level of data collection? I can imagine Apple is PISSED. I just really hope they had something planned for this day.
> Oh they could, I mean Apple could just keep killing their accounts they create but is FB ballsy enough to keep opening them?
Enterprise developer accounts (the ones that can issue apps signed such that they can be sideloaded on any device) aren't something just anyone can go online and sign up for-- they require manual approval with proof of a business's identity before they're created.
So, unless Facebook starts opening well-disguised shell companies or something along those lines to circumvent any restrictions Apple might put on them, this will be over as soon as Apple revokes Facebook's enterprise distribution account. (Or, more likely, threatens Facebook into dropping the VPN app, because FB probably doesn't want to lose the ability to distribute legitimate internal-use apps to their employees.)
> Enterprise developer accounts (the ones that can issue apps signed such that they can be sideloaded on any device) aren't something just anyone can go online and sign up for-- they require manual approval with proof of a business's identity before they're created.
It's my understanding that faking these business identities is the entire business model of iOS sideloading services (see the subreddit for examples [0]), so I don't think it's that difficult to do. That said, I'd be shocked if Apple let them go so far as to keep spinning up fake businesses, but then again, if FB thinks it can get away with it, what's stopping them?
That would be getting into serious fraud, arguably criminal under the CFAA. Normally Apple isn't interested in prosecuting these people, as it's just sideloading, which in their view amounts to some minor copyright violations and a security risk. But if Facebook did it after having been banned themselves, with a written statement from Apple that these apps were in violation, and then went and paid a third party or deliberately made a shell company to defraud Apple? That could provoke a total business embargo between the companies, which would suffocate FB.
> Apple says. "Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”
> Oh they could. I mean, Apple could just keep killing the accounts FB creates, but is FB ballsy enough to keep opening them? It would be FB calling Apple's bluff. Users would riot, though. Apple has to decide between supporting privacy and supporting their users' choice. Apple has made the decision to take away users' choice in the past in the name of safety/protection; I could see them doing it again.
Apple could block all updates to Facebook's apps until Facebook complies with their policies. That would get Facebook's attention in a way that wouldn't alienate Apple's users.
Facebook needs mobile, which means they need Apple more than Apple needs them.
Now that's a really good point. Also, I know Apple has straight up pulled apps (Tumblr) in the past. They don't uninstall them from users' devices (though they could), but Apple does have a number of tools in its toolbox:
1. Ban all accounts that were publishing this "VPN" (I assume FB didn't use its main account for any of this; if they did, leave that account alone and ban the others)
2. Block updates to FB for some period of time if they try to open new accounts and get caught
> Facebook needs mobile, which means they need Apple more than Apple needs them.
Is that actually true? Last time I checked iPhone only had a 20% market share. People buy phones, including iPhones, to do stuff with them. What Facebook provides is the stuff a huge part of the users want to do with their phone.
Imagine iPhone users could no longer WhatsApp/FB Messenger with their Android-using friends. How many people would think twice before buying an iPhone again? Facebook screws with privacy the users don't care about (yes, the average user doesn't give a shit, especially if they get paid), while Apple would screw with the users' apps, which they care about a lot! Apple is at a disadvantage here, especially since their whole business model is a better user experience on overpriced hardware.
>> Facebook needs mobile, which means they need Apple more than Apple needs them.
> Is that actually true? Last time I checked iPhone only had a 20% market share.
But it's a relatively premium market segment that Facebook can't afford to lose. If they cede it, they're taking a serious risk that a serious competitor could emerge on the platform that turns them into the next MySpace. That 20% could pull the rest of the market its way, since whatever they migrate to would likely be available on all platforms.
This isn't a far-fetched idea. It's basically what Facebook did with its initial rollout exclusively to the Ivy League schools.
> Imagine iPhone users can no longer WhatsApp/FB-messanger with their Android using friends. How many people will think twice before buying an iPhone again?
That might have been true five years ago, but Facebook's products are much less compelling now, for a whole host of reasons. Cross-platform replacements would quickly emerge to fill the niches Facebook was driven out of. Many people would get mad about not having Facebook on their phone, but most of them would get over it. But others are already primed to abandon Facebook, they're just waiting for a push.
> Apple is in the disadvantage here.
No, Facebook is, since their dominance of social networking is so tenuous that they need to convince people to use spy-VPNs to stay on top of emerging competitors.
To the average user, this would look like Apple was exerting control over devices that are rightfully owned by the user, in the same way it looked to people in the free software community during the advent of the App Store and the locked-down iPhone that needs to be jailbroken in order to do what you want with it. They would still want Facebook, an app they'd been using for lots of minutes a day, and they'd be denied it.
It might actually get ordinary users interested in making it so Apple doesn't have total control of what can be installed on the phones they purchased. They won't see all the philosophy behind it that people in the free software community do, but it would point them in the right direction.
I mean it depends on what you mean by "possible". I'm pretty sure Apple has the technical capability to delete apps over the air. The more interesting question is what they're willing to do: are they willing to pull FB from every iPhone? I don't think so.
If nothing else they could issue an iOS update that permanently bans the trusted root certificate, erases all apps, and directly blacklists the Facebook certs. Heck, they could even block connections to facebook servers at the MAC layer if they wanted to. And they could require users to install it to get future updates. Of course, that would never happen.
Could Apple, in a software update, put Facebooks apps in a sandbox, or wrapper, or VM type thing? Not sure of the technical term, whereby the device would only return encrypted or dummy device data? So that regardless of what permissions FB app has been given, each time it goes to get that data the VM interrupts and asks the user.
Say FB wants to get your location: even though the FB app has location permissions, a pop-up says "Facebook is attempting to find your location. Do you consent to sending your location to Facebook?" Or "Facebook is attempting to read the names, telephone numbers, and addresses of everybody in your contact list. Do you consent to this?" every time the Facebook app makes the request?
It'd be a bad user experience, but Apple could say it cares about privacy and blame Facebook.
Not really. It’s a hard pill to swallow for Apple that most iPhone users are active Instagram/Facebook/WhatsApp/Messenger users. If they took those apps down people would be super pissed. I know I’d be extremely annoyed.
It would be an interesting twist of irony if they did take them down and there was a massive backlash against Apple. I have a sneaking suspicion that the media and Twitterati are more up in arms about all this than the users themselves.
Reasonable action here would be to threaten to pull the affected apps off the store if Facebook doesn't react within a few weeks. I am very confident that Facebook would not take the amount of bad press this would create. Apple has a lot of leverage here, they don't need to just ban facebook apps outright.
I disagree. It’s a pretty equal symbiotic relationship IMO. I keep reading people saying “NO other developer would ever get away with this!” Yes they would- if they had 2.2 billion active users on their platform. This may shock a lot in the tech community, but for some companies the rules don’t apply. Ask Procter and Gamble why Wal-Mart gets better prices than a local grocery. I truly believe Apple needs Facebook at least as much as Facebook needs Apple.
I think you might be underestimating to what degree this behaviour actually violates the sort of standards that Apple sets out for its products. Gaining control over all information on your device, including the content of private messages of teenage users shouldn't fly. Preventing this sort of stuff is one of the reasons people pay a premium for apple products, and Cook has stressed this over the last few years.
On top of this you can add the fact that they basically shipped renamed onavo code, which was already banned from the app store, so this is de facto a violation of Apple's rules.
It's in the long term interest of Apple to not be soft on this stuff, it's not symbiotic.
If you pay Apple $99 a year you can install whatever you want on your own phone only. There are no restrictions on directly installing IPAs with Cydia Impactor or Xcode. You can actually do it for free, but only a few apps at a time and must renew every 7 days.
I could argue that Apple is just as responsible for this as Facebook. How can they claim to take privacy seriously if it’s clearly possible for bad actors to get around the rules multiple times! How is Apple any less implicated in this than Facebook was in the CA scandal when a bad actor violated its policies and posed as an academic research project to gain access to user data it then sold to third parties? If you are going to hold Facebook accountable for CA, why does Apple get a pass when it enables third parties access to my data?
> I could argue that Apple is just as responsible for this as Facebook.
And you'd be wrong.
> How can they claim to take privacy seriously if it’s clearly possible for bad actors to get around the rules multiple times!
That's BS. You might as well say: "how can they claim to take security seriously, if it's clearly possible for bad actors to find exploitable bugs in their products multiple times!"
Apple has a tough job, and it won't do it perfectly because no one can. It's bizarre to claim that its excellent but not perfect performance somehow makes it guilty of the things it's trying to stop.
Probably not to have Facebook piss all over their privacy.
I suspect there's been enough revelations about FB practices that many users would support Apple if they blocked the main apps. For a temporary block anyway.
Much of the audience here is probably much more understanding of the situation and aligned with your (presumed) position, myself included.
Most of the general population probably neither understands nor cares that much if someone is watching what sites they visit or other basic privacy items and if you make them choose between privacy (especially privacy of others) and being able to post a picture of their lunch, many will choose the latter.
I have had a LOT of conversations with non technical people about this recently.
“My phone is listening to my conversations” is how it goes - people know this tracking is happening, they hate it and find it intensely creepy, they just don’t know the mechanism being used.
They might get into antitrust litigation if they revoked the Developer certificate (public apps and public betas), which was not breached, rather than just the Enterprise one (used here, and for legitimate employee apps). Apparently they have done the latter, causing massive chaos; the former would be an absolute nuclear option.
No iPhone user could use any Facebook apps, anywhere in the world, which would make this story front page on every newspaper. Businesses could no longer manage their ad spots or use iOS devices for social media. They would likely be shocked at the unwarranted disruption, rightly blame Facebook for it, and cut their ad spend. Both PR departments would be working full steam on a war of words, disrupting all other work. Numerous suits would be filed. Meanwhile, Facebook stock would crash, leading to numerous investor lawsuits, especially since Facebook clearly risked this by blatantly violating contracts. Institutional investors would cut losses and pull out, further driving the price down.
I'd love to see it happen. But Apple doesn't want to, and honestly can't be expected to, pull the nuclear option just as a punishment for this. They would incur massive PR and legal expenses in response.
Smarter, less disruptive move: block research and put all Facebook apps in some kind of security sandbox with the sole intent to slow down the experience.
At least at this point, antitrust regulation hasn't prevented Apple from removing apps from its store that violate its policies. I don't see how it would be any different if they created a penalty short of removal that's only applied to bad actors with a history of needing it. This is needed for more than just Facebook's stuff; it could also apply to trojan privacy invaders like the Weather Channel app.
They could slow-walk updates while doing unusually thorough privacy audits, and perhaps even apply extra access restrictions (e.g. skewing location, forbidding use of certain permissions, etc.).
No, they can't, because there's still competition in the marketplace in the form of Android. Apple isn't a monopoly, if you don't like the App Store there's 20 other phones at your carrier you can buy instead.
Why is Josh Constine still covering Facebook at TechCrunch? Is there no accountability for journalists who totally failed us?
For context, he's the guy who was supposed to be covering Facebook over the last 10 years, but instead of hard hitting journalism, we got nothing more than press releases and pro-FB articles.
I think a lot has to do with media outlets like TechCrunch having an “oh shit” moment wherein they realize Facebook is taking their profit machine from them (advertising). He even admits as much in the Twitter comments where he posted this.
If people at this point doubt that traditional media is waging war against Facebook as a means of survival and masquerading as a bastion of privacy as a means to an ends they are willfully delusional. These organizations show much more intrusive ads to me than Facebook. Also they treat Twitter with kid gloves because Twitter is useful for them to gain a following and disseminate their posts. Twitter has also shown me much more politically motivated ads recently than Facebook has.
I have this app installed on my phone, by choice. I am getting paid $20 in Amazon credit per month to have it installed.
Why do I do this? Because I enjoy making side hustle money with my phones. This research app in particular is very useful to me because it is 100% passive. If you are concerned with privacy you can always just use a crap side phone to run the app.
Could you summarize what you believe you're giving up in exchange for $20/month?
(There's a thread about informed consent elsewhere in this discussion. I'd like to understand how informed you are about the risks associated with the app and certificate.)
Selling a kidney is NOT the same as telling companies about your behavior. Organ transplants are extremely risky and should only be done if there's no other alternative. The risk of death is very real in those cases. I don't think that's true when you install a Facebook VPN.
Some people value their personal data more than others. That's why no one should be forced into giving up their data (essentially what GDPR is for). But if they want, they can of course set a price for their data.
I'm not a law expert, but IMO FB should seriously lawyer up. I can say that this kind of misconduct would almost certainly end up in court in Germany. A 13-year-old's consent under a cryptic data protection policy is not legally binding, and luring kids in need of protection with money to give up fundamental rights can be viewed as an act of non-physical abuse.
I agree. That can also be called "common law" in Germany. Contracts with people who are not entitled to sign contracts on their own behalf are not considered legally binding.
In that sense, it shall be treated as if there had been no contract at all. The process is purposefully designed to get a signed contract as fast as possible. The technology for proper ID (age) verification is available, but my understanding is that it is not used by Facebook and its partners.
COPPA doesn't require foolproof age verification IIRC; otherwise the regular Facebook app, and nearly every app on the planet that just asks for a birth date with no verification, would be illegal.
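For what it's worth, that "just asks for a birth date" check amounts to something like the following. This is a hypothetical sketch (the function and constant names are mine, not from any real app); the point is that the app simply trusts whatever date the user types in.

```python
from datetime import date

# Minimum age COPPA effectively gates on for parental-consent requirements.
COPPA_MIN_AGE = 13

def is_old_enough(birth: date, today: date) -> bool:
    """True if the self-reported birth date implies an age of at least 13."""
    age = today.year - birth.year
    if (today.month, today.day) < (birth.month, birth.day):
        age -= 1  # birthday hasn't happened yet this year
    return age >= COPPA_MIN_AGE
```

Nothing stops a user from entering a false date, which is exactly why "asks for a birth date" is not verification.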
If you believe Facebook to have violated the law in your state, you can write your Attorney General and ask them to enforce the law or explain how Facebook is not in violation of the law.
Apple should pull their entire developer program account! This is insane abuse of either Enterprise signing or developer signing - it could be either depending on the setup process.
This all seems like a rerun of Uber - bad news on bad news compounding on bad news. Something is very rotten at Facebook and has been for a long time now.
I think the basic concept of what Facebook is as a company is just a rotten idea. This sort of monetized social network is a very dystopian concept and probably indicates a basic societal issue.
Facebook's record is the best argument for banning dual share classes: if you want to keep control of your company, you shouldn't be able to do that at the same time as selling most of it off. If you are running a public company, you should be accountable to your shareholders.
I disagree, those shareholders know what they're getting into when they buy the stock. I do agree with just about 95% of criticism directed at Facebook, so this isn't me defending them specifically. I just don't see why dual share classes should be forbidden as long as people are properly informed when purchasing non voting ones.
If I offered you 10% of my bakery's profits but told you explicitly that you'd have no voice in how I run it, it could be a good deal for both of us and we'd both know what we're getting into.
It's not about whether it's a "good deal" for the investors: it is that as a society allowing unaccountable business dudes to run roughshod over society isn't working out so well.
Agreed. I'm curious if FB just doesn't have to play by the same rules? It would seem like the loss of FB Research's Enterprise cert would be a major pain for them.
Facebook, like every other large company, has a separate set of rules, though for something like this I can't imagine Apple looking the other way. My expected outcome is that the current certificate is revoked, Apple has a talk with their team and tells them not to do this, and then allows them to request a new enterprise account.
If they ruin this for the rest of us, I will never forgive Facebook. I hope Apple comes down on them like a bag of bricks instead of f'king up my beta testing.
Hard to forgive Facebook for undermining democracy, mass behavioural manipulation, privacy violation, monopolistic practices, and a gazillion other things.
Maybe I am getting old but I am really becoming more and more skeptical of anything that's coming out of Silicon Valley. Everything Google, Facebook and other ad-supported companies produce seems to be designed to do something behind the backs of users. If the trends continue these companies will soon have world-wide full surveillance of almost everything which then will be perfect infrastructure for dictatorships. And with the growth expectations these companies will have to do more and more sleazy stuff to keep growing.
The problem is simple and relatively recent: advertising business models lead to unethical behaviour. That’s all. The worst companies are those most dependent on advertising.
This is hardly different from all those ad supported platforms we use every day. For privacy invasion, you get compensation. Usually you get to use a service free of charge, in this case (because the invasion is worse) you get financial compensation.
If you mind this, you should be honest with yourself and compare it to all the other deals you're striking with many services.
Let's put it this way, imagine if everything that you ever said in various businesses was recorded and publicly available. That's analogous to the "many services" you're referring to.
Now imagine if people were being offered the chance to get some gift cards in exchange for strapping a microphone to their face 24/7, regardless of location. That's analogous to what's happening here.
Anything you do, visit, etc., can be collected: your bank app traffic, your location data that any app requests, the contents of your voice calls made over data in non-FB apps, etc.
But I don't see a fundamental difference between strapping a microphone over someone's mouth 24/7 and only strapping the mic on (or, more practically, turning it on) when the user uses certain applications. In both cases you're compensated, yet in the former we feel violated, while in the latter it's all fair game and business.
Reminds me of those old "dial up" Internet access offers that offered to pay you to surf the web. Being a broke college kid, my friends and I quickly learned we could game the system by installing software that moved our mouse and clicked randomly. Made for a pleasant and sometimes shocking surprise when waking up in the morning and checking to see where your PC navigated to the previous night.
I only got paid $31 for a month. Even as a kid, it wasn't worth the effort required due to their constant updates.
The real hidden gems were NetZero and K-Mart's BlueLight. Both were completely free dial-up internet providers, paid for by a banner ad program that was easy to hide with window killers.
Netzero went on to acquire BlueLight and many other free internet providers, and eventually turned into a paid internet service: https://www.mybluelight.com/
>I only got paid $31 for a month. Even as a kid, it wasn't worth the effort
My exp exactly.
>The real hidden gems were NetZero and K-Mart's BlueLight.
Yes and Yes! Used both, both were a giant pain but fun to mess around with. Now that I'm thinking about it, I'm not sure why I bothered as our family had dialup (I was probably just bored)
At what point does Apple pull Facebook's developer licenses? As people have mentioned, this appears to be a violation of the enterprise account program.
It's important to note that Facebook has at least two "licenses" here: one is used to push their apps to the App Store, and another to sign enterprise apps (like this one). Pulling one should not affect the other.
Well, this is the final straw for me. Facebook has been so repugnant lately that it's time to delete my account. Does anyone here recommend an alternative? Right now I'm considering Mastodon and MeWe.
EDIT: I ultimately went with MeWe because it's more user-friendly to non-tech people i.e. most of my relatives.
> Well, this is the final straw for me. Facebook has been so repugnant lately that it's time to delete my account.
Don't delete your account. Just delete all your posts and change your profile pic to something that tells everyone you've ditched Facebook. It'll continually remind everyone you've left and make Facebook seem more like a dying community to those who are still on it.
Then finally delete your account once it's as dead as MySpace.
Would joining FB as a Privacy Czar or whatever be a job one should rightly wish on a highly competent privacy person, or is it a new form of torture in Hades alongside Sisyphus and others?
A tiger can't change its stripes. Self-regulation doesn't work [1], they will bypass any rules they create for themselves whenever it's convenient to do so. The US needs to pass a GDPR-style privacy law.
I think you need to recognize that for many people, body parts and privacy are two extreme ends of a spectrum of things they are willing to part with for money. Honestly, I don't get what the issue is with users literally opting into (for money) this service by Facebook and being willing to share what they are doing on their mobile device with them.
Many replies on this topic remind me of that joke with the wife that sends her programmer husband to buy eggs. As if we're having some silly contest where the smallest ambiguity or contradiction in criticizing Facebook completely negates their aberrant and anti-social behaviour.
Obviously it's not the same thing, that was a list, a category of things which are in some way similar. If I include a dog, hippo and human in a list of creatures, would you complain that a human is not a dog?
The issue is, they are selling something which they can never get back for a pittance, and that something can come back and harm them at any point for the rest of their lives.
There is a reason privacy is a human right, its loss can deeply affect the lives of those that forfeit it or have it abused.
Many countries have codified the legal concept that persons can't sign away their rights, to avoid predatory contracts, like the one Facebook's bribing teens with $20 to accept. Any such contract is void.
Privacy is also a human right, as declared in the universal declaration of human rights.
How are they able to pay or contract with users under the age of 18? Or do they get parental consent? Not sure how that works. (Referring to the $20/month; I assume under 18.)
What is Facebook thinking?? Shouldn’t a company which is already getting bad PR for its handling of private data be extra careful about how much personal data it gathers and what it does with it?
The funny part is you always have those people that show up and say that we are "too hard" on Facebook, and that the NYTimes is just pushing too much and that it's just bad PR, not really bad actions.
And yet, after all this bad PR, Facebook keeps being a shitty company.
I mean, Facebook does deserve the negative PR it's receiving, don't get me wrong. I finally deleted my account too, since it's become too much. It does seem to me like it's very much in the media's interest to keep attacking Facebook now that it's socially acceptable (Cambridge Analytica and all), since Facebook is one of the companies that greatly interfered with their ability to generate income.
I'm emphatically not defending them here, but it may make sense from their perspective. They live and die by ad targeting and by maintaining their monopoly power over social networks, both of which require large amounts of personal data. Knowing what apps teenagers have installed and how often those apps are used is instrumental in detecting an up-and-coming social media rival.
Further, their only major competitor in the ads space is Google, which has access to this information via Android and its control over the Play Store.
Plus, what are the teenagers going to do about it? Facebook also owns Instagram. I guess they could use Snapchat...
Yeah it makes sense, but the problem for their shareholders is their business model will constantly turn them into a huge boogeyman and they'll never recover goodwill that keeps their users stuck to the platform when competitors appear. Mass user migrations happen, and they can happen VERY fast. Wait until a whole country goes off of Facebook.
Why should Zuckerberg care? You know on the plane ride home from his visit to Congress, he was laughing about the old people who are so out of date they just don't get it. You know he was... He got a finger-waggin' from people he doesn't respect. He's not going to change.
Generation Z and even some millennials are starting to ditch Facebook. The company is desperate to know what the kids are into now. This is what growth-at-all-costs gets you.
When you have enough personal data to blackmail anyone who disagrees, you can go pretty damn far.
Eventually they'll encounter a hero, someone important enough who says "fuck it"
It's just speculation that they're blackmailing anyone to make things happen, but anyone can see the incentive is there. Hard to imagine blackmail isn't just waiting to happen with that kind of data.
Why are there always so many people willing to defend the oppressor?
This story is full of people making apologias for facebook's shady behaviour.
I just don't get the urge in some people to defend the rich and powerful.
They don't need you to defend them, they are probably 100000x richer than all of us discussing this here put together.
This is an honest question because I can't understand the motivation behind it. If you are one of those people defending facebook, why are you doing it?
I have Google Fi and I'm not sure this is the same thing. The Google VPN doesn't install a root CA (AFAIK) and merely acts like a normal VPN. It does terminate at Google's servers so they can peek at your traffic, but not any more so than a traditional ISP.
The "secure" VPN dialog only comes up for certain WiFi connections that Google has some knowledge about. For example, when I'm in Chick-fil-A I get the "Secure this connection?" dialog, but I never get it at home or work. I've never had a need to disable it to reach local resources, but I'm guessing you could turn off WiFi and turn it back on to rejoin the network and not accept the VPN connection. I've never had the need to do that so I don't know if it would work or not. :/
No, this was different from the automatic per-known-wifi connection securing that it normally does. It was an actual VPN, that I had to manually remove because I could no longer access anything on my LAN.
It's no surprise that over and over again, Facebook has shown us that it will stop doing some of the bad things it does only when it's caught red handed. Until then, the employees who work there won't have any ethical qualms and the company won't care much about the impact either. Every time some news like this comes out, the PR department probably shrugs its shoulders and says "What are the users gonna do now? Quit our platform? Where will they even go to?" and just laughs out loud.
If a person were to adopt this behavior, we would call them a criminal. Facebook, on similar lines, is a criminal enterprise that hasn't been punished appropriately so far!
The article is misleading; the app does not have root access, it has access to web traffic via a trusted root certificate. The app is distributed outside the App Store through the enterprise developer program.
Yes, I know what the enterprise program is for; this is a clear violation of those rules. But Facebook is not getting root access to your device, as the title of this post and the article claims.
To get a good idea of what they are doing, visit whatsmyudid.com and follow the instructions to use SuperUDID (you don’t need to go through with it, just read the steps).
Similar to SuperUDID, they installed a profile onto their device that provided special privileges for Facebook.
This is actually not quite true. They only installed a signing profile, which still requires a prompt to install each app and wasn't used to do anything but install the research app. The "keys to the kingdom" is the trusted root certificate, which, coupled with the VPN, made them your ISP and gave them complete access to all traffic in all apps.
I was showing how they would go about getting the trusted root certificate on the phone, which can be packaged with MDM profiles installed similar to the method used above.
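To make the "keys to the kingdom" point concrete, here's a rough, hypothetical sketch of what an interception proxy does once its root is trusted: it mints a fresh leaf certificate for each hostname the client asks for (seen via the TLS SNI field) and signs it with that root, so every connection looks valid to the device. Real tools like mitmproxy do this with actual X.509 issuance; signing is stubbed here with a hash.

```python
import hashlib

class InterceptingProxy:
    """Toy model of a TLS-intercepting proxy backed by a trusted root CA."""

    def __init__(self, root_ca_name):
        self.root_ca_name = root_ca_name
        self._leaf_cache = {}  # SNI hostname -> fake leaf "cert"

    def _sign_leaf(self, hostname):
        # Stand-in for real X.509 issuance: derive a deterministic fake cert.
        digest = hashlib.sha256(f"{self.root_ca_name}:{hostname}".encode()).hexdigest()
        return f"CN={hostname}, issuer={self.root_ca_name}, sig={digest[:16]}"

    def cert_for(self, hostname):
        # Mint once per hostname, then reuse, as real MITM proxies do.
        if hostname not in self._leaf_cache:
            self._leaf_cache[hostname] = self._sign_leaf(hostname)
        return self._leaf_cache[hostname]
```

Because the device trusts the root, every leaf the proxy mints passes chain validation, and the proxy reads the plaintext of every non-pinned connection.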
Tangentially related, but this leads to an issue I try to stress to my students all the damn time, and they just don't care. Like, they all try to download any free VPN that'll connect so they can play Fortnite during school, without ever looking to see what they give away. Hell, I'm not convinced they wouldn't consent even if they were told, as long as they get their fix while at school. There's a huge problem nobody wants to try to fix.
True, but as long as they don't install a trusted root as well, the damage is still far more limited. Still a major privacy issue (and a huge shout-out to you for passing this knowledge on to my generation; sadly, many of us really don't care). With this, however, the possibilities are really limitless. For example, the simple act of filling out a job application online, common for students to do, gives FB your SSN and enough info to open a line of credit.
How was this data stored? Who at Facebook had access to the SSNs and nude photos and the like that were certainly collected by a program of this scale? Were there procedures to delete it? How were the systems secured? And while I doubt even FB would do this, a truly lawless bad actor could use those logins without tripping security alerts, because you would have used one of their IPs to sign in before. Or an external actor, having access to the data dumps, could sign up as a user and then make use of the VPN to easily pwn everyone's accounts.
The more I think about this, the more outrageous it is. They may as well put cameras in your house and photocopy all your papers.
From the students’ perspective, the reward is greater than the obscure potential consequence. I have certainly noticed this with adults, as well, so it seems like fairly basic human behavior.
The arms race between school administrators and people who wanted to connect to things they shouldn't has been going on for decades and is one of the major sources of sysadmins. The problem here is that installing a VPN is far too easy and doesn't teach children where /etc/hosts is located.
At my school, there are default "teacher" accounts for external events that use school facilities (usernames and passwords based on each school's code) that bypass all filters, but somehow no one has found them, leading to insane workarounds. Sadly, people used to use web.archive.org as a workaround (like dude, just go home and look stuff up?) so they blocked it, which breaks a lot of research and sources on Wikipedia.
Certificate pinning. And perhaps warning your users about potential baddies if someone tries to change it.
Elsewhere in the article it mentions people were paid to screenshot their Amazon order history. Why would they do that if they could read all app traffic? My guess: Amazon is smart enough to use certificate pinning and/or not trust locally added root certs.
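At its core, pinning is just comparing a hash of the server's public key against a value hard-coded in the app and refusing the connection on mismatch, regardless of what the OS trust store says. A minimal sketch (the pin set and key bytes are hypothetical; real pins are base64 SHA-256 hashes of the DER-encoded SubjectPublicKeyInfo, HPKP-style):

```python
import base64
import hashlib

# Hypothetical placeholder pin; a real app ships the hash of its backend's key.
DEFAULT_PINS = {"hypothetical+pin+for+the+real+backend+key="}

def spki_pin(spki_der: bytes) -> str:
    """base64(sha256(DER-encoded public key)) -- the usual pin format."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def connection_allowed(server_spki_der: bytes, pins=DEFAULT_PINS) -> bool:
    # Reject any key that doesn't match a pin, even if the OS trust store
    # says the chain is valid -- which a planted root would make it.
    return spki_pin(server_spki_der) in pins
```

A MITM proxy can mint a chain the OS accepts, but it can't reproduce the pinned public key, so a pinned client drops the connection.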
I don’t know how I feel about this. On the one hand Facebook is working around Apple’s terms and conditions, and enticing underage kids to compromise their phones. On the other hand, there’s no law broken here, Apple’s terms and conditions are not generally fair to users, and the kids know all of their data is being tracked for money ( and presumably are happy with it). Plus, freedom to use the personal device you bought as you see fit is preferred to Apple deciding what you can and cannot have on your phone. So, how should I feel about this?
> kids know all of their data is being tracked for money ( and presumably are happy with it)
Do they though? Do they really understand what it means? Do they understand that that nude they sent to their GF/BF is now on a FB server and FB has FULL rights to have and use that (obviously not publish it but still)? Do they just not care?
I really don't know, the last question kind of terrifies me TBH. My hope is they don't fully understand what they are giving up and "$20 is $20".
It’s hard to gauge how much they really understand, but it’s also hard to gauge how much it really matters. I mean most of the data coming out of them is noise, really. They’re not buying houses or making major financial transactions. If this is worth doing, then $20 may even be their primary source of income, and there may not be too many legitimate ways of earning that income at that age, especially passively.
Sounds like AllAdvantage.com (I had to look that up again) all over again. They tracked your internet usage and paid you money for it. My friends and I thought it was a good deal and had no compunctions about gaming it. I’m sure these kids are thinking the same thing, and our moral outrage is self-inflicted.
Is it possible for Facebook to have good intentions that lead to positive outcomes, or is everything they touch toxic?
I think maybe we can have more than one opinion simultaneously. I really like some of Facebook’s features, and Apple has some good things going for it. And it’s definitely fair to criticise both for their shortcomings.
Good point. This _should_ be illegal. And there is definitely a need for better data protection legislation, to protect consumers against exactly this power imbalance.
He's making a reductio ad absurdum to show why the reasoning he is replying to doesn't work. It's a good form of argument, quite similar to a proof by contradiction.
Actually it's your reasoning that's faulty. All crimes are not equal. That's why society deems it appropriate to outfit different crimes with different punishments.
Let's not forget that until mid-20th century, adultery was illegal in most states...
> the kids know all of their data is being tracked for money ( and presumably are happy with it)
Yes, and?
Kids (and some adults) frequently send demeaning nude videos of themselves to public web forums (and promptly get harassed IRL). Are they being defrauded? Or do they just act that way because they are naive and can't read emotions and body language through a computer monitor? Either way, the incident described in the article is a clear violation of children's rights. Whether the parents properly understood and condoned it, or were defrauded by Facebook, does not really matter (I suspect that if any of them faced a trial, they would all claim the latter).
> and the kids know all of their data is being tracked for money ( and presumably are happy with it).
Yeah, obviously, kids are very good at understanding the consequences of their actions. I mean, imagine if they didn't! We would need special rules in criminal law for dealing with young offenders or something!
To me this is a reflection of society at this point. Monitoring and tracking of online activity was something to be avoided when I was young; I can only assume something like this would have harmed a company's reputation back then, and people would have gone elsewhere. Although, $20 per month makes me wonder whether my past self would have cared, with all my info being data mined. I doubt I went unexploited in some way in the end; likely I just missed out on the little money that's now being offered because devices are more secure.
Damn, I tried DM'ing some security researchers about this a week ago, looks like I should have just sent it here. I've got some added details from my research into it.
1) Regarding distribution channels, I have only once had the program advertised to me, via an Instagram ad. I have my real age on Insta (I know, I know...) so targeting younger users may have played a role. I first saw the ad in June 2018, and decided to click through to see how bad it was from a security standpoint. IIRC I never installed it, but I got an email to my throwaway account a few weeks ago asking me to reinstall, so I decided to give it a run-through for research. They refer to it as the "Research Application" and avoid mentioning FB; their first email came from facebookresearch@applause.com, and mail is now sent through a mailer with no mention of FB in the email address. The contractor was Applause/uTest; they offered $10/month via PayPal (which under-18s technically aren't allowed to have). Since uninstalling, they sent an email saying they hadn't heard from the app in 24 hours and that you must participate 23 days per month to be paid.
2) The install link is at https://m.facebook.com/facebook-study/f8854f1fb9f4f57bf0d861..., and the IP used by the VPN is vpn-sjc1.v.facebook-program.com / 185.89.216.194. On iOS, the "Connect on Demand" feature is used to render the normal VPN off switch useless; one must uninstall the app or turn off Connect on Demand on the VPN info page. Outgoing traffic exits through a regular FB IP (I wonder if any IP-based authentication on their systems might be weakened by doing this?).
3) I definitely agree with TechCrunch that this is an Apple ToS issue; however, they are wrong to say that FB "avoided TestFlight." TestFlight is for closed betas only, and this app is not a beta of anything, so it is patently ineligible. Interestingly, if Apple revokes their cert in response (as they do to shell-company certs used for sideloading marketplaces), it would result in an immediate shutdown of all of Facebook's legit internal apps, because Apple (AFAIK) only issues one cert per DUNS number. Notably, the cert says "Facebook, Inc. (In-House)," not "Facebook Research, Inc.," so it looks like the main cert. I've sent a complaint to Apple Privacy about this, and will report back if they reply.
3b) Apple's Enterprise Dev Program ToS[1] excludes from allowed internal apps those that are, "used, distributed, or otherwise made available to other companies, contractors (except for contractors who are developing the Internal Use Application for You on a custom basis and therefore need to use or have access to such Application), distributors, vendors, resellers, end-users or members of the general public." It does allow the use of written, binding agreements to enable contractors to use the app, but it seems doubtful that this would extend to those ostensibly participating in social research for a nominal compensation.
4) Most users need to be clearly told that installing a "trusted root" cert is the keys to the kingdom. Providing a normal VPN honestly wouldn't be that bad, as TLS protects everything but the domain name. So they could see "morpheuskafka made 200 requests to reddit.com in an hour," but not the content, much less my login and password. Most people who know what a VPN is are familiar with the idea of their network traffic being rerouted and monitored at the ISP level, but they could easily think they were installing the VPN's server certificate, or a client certificate to access it. Also remember that Facebook owns the certs for its own platforms, so they could (ethically) monitor your use of their own services without this. Remember "don't even give the IT people your password"? Fill out any login form and FB has your password (and can use the same IP to sign in without raising suspicion). Job or college app? SSN, tax info, etc. Only e2e is safe. Notably, Caddy's MITM detector cannot detect this "research app."
I hope they do revoke the signing cert, and will enjoy seeing all their internal apps stop working in the chaos. And I hope that Google and other large companies send password change warning to anyone who has logged in from these IPs.
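Point 4 above distinguishes what a plain VPN sees from what a trusted root unlocks. Even without the root, a VPN operator observes destination hosts (via DNS and the TLS SNI field), timing, and traffic volume; a rough sketch of aggregating that metadata, over hypothetical (host, bytes_sent) observations:

```python
from collections import defaultdict

def summarize_traffic(records):
    """Aggregate (host, bytes_sent) pairs into per-host request counts and volume."""
    stats = defaultdict(lambda: {"requests": 0, "bytes": 0})
    for host, nbytes in records:
        stats[host]["requests"] += 1
        stats[host]["bytes"] += nbytes
    return dict(stats)
```

This is exactly the "200 requests to reddit.com in an hour" picture: no content, but plenty of insight into which apps and sites someone uses and how heavily.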
I’m sure this was approved by product counsel (most features go through legal review at large companies like Facebook) and was justified (even using the Enterprise certificate) by arguing that the teens are actually contractors working for Facebook who are being paid and have consented (along with their parents, if necessary).
Your phone doesn't just contain private data about you, it contains private data about people you know. Selling them out like this is a scumbag thing to do, not to mention it's probably a ToS violation of every service you interact with. FB basically solicited people to unwittingly commit crimes on their behalf.
I'm curious what devs working at Facebook feel about the shitstorm surrounding Facebook the last couple of years. Do you still work there and is the money the only incentive for doing so? Would you jump the ship if you got the chance? Or are you ok with everything that Facebook does?
A simple UI change like making the root certificate trusting UI look like you're about to do something extremely dangerous would likely stop a lot more users from giving their data away like this. But instead, Apple shows you a benign-looking "warning."
I'm really curious, how the average Facebook engineer feels about this data mining and when is the point when they think they should stop building tools to allow Facebook to do this.
Isn’t this....kind of what the entire tech community has been PUSHING Facebook to do? Paying you for your data instead of acquiring it through shady means...?
I’m sorry but...it’s not as if they’re using this data to do evil things. They’re trying to target advertisements. Whooptie doo. So evil.
Didn't want to edit my already too long comment, but it's worth noting that on a closer read of the Apple Enterprise Terms, they state that they have the ability to notify users, at 5.3: "[Facebook] understand and agree that Apple may notify end-users of Covered Products that are signed with Apple Certificates when Apple believes such action is necessary to protect the privacy, safety or security of end-users, or is otherwise prudent or necessary as determined in Apple’s reasonable judgment." We need to call on them to do so immediately and fully remove the profile, apps, and certs issued by this and any other programs of FB.
I don't recall the app having any mechanism to filter EU/EEA nationals, so the GDPR shitshow is about to explode in Facebook's face as well.
There's only one activity you do when watching TV and that is... watch TV. Even on SmartTV sets, the amount of data the screens can gather is limited in scope and quantity.
There are all sorts of things you do with your smartphone and this VPN tracks all of them.
Just because it makes you uncomfortable doesn't mean it makes everyone uncomfortable.
I agree with the previous user that it isn't that different than what Nielsen does to collect TV ratings.
Maybe Facebook's execution here wasn't the best; for instance, I agree that a better device would have some limits on what kinds of data it could access.
The HN title/article is slightly misleading with its use of "root access"; the iOS Enterprise Root certs give increased data access outside of an app (e.g. decrypted SSL traffic, like what this app was doing), but not "root access" in the Unix sense.
You can play with this yourself using mitmproxy, which generates a certificate for you and intercepts SSL traffic.
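For instance, here's a minimal mitmproxy addon (a hypothetical sketch; the file name and recorded fields are my choices) that logs just the metadata of every request crossing the proxy. Run it with `mitmdump -s log_addon.py` after installing mitmproxy's generated root CA on the device:

```python
# log_addon.py -- minimal mitmproxy addon sketch (hypothetical).
# Once a client trusts mitmproxy's root CA, every non-pinned TLS
# request arrives here already decrypted, so even this trivial hook
# sees the host, path, and body size of "secure" traffic.

seen = []  # (host, path, body_size) tuples


def request(flow):
    # mitmproxy calls this hook once per client request; flow.request
    # exposes the decrypted HTTP message.
    body = flow.request.content or b""
    seen.append((flow.request.host, flow.request.path, len(body)))
```

Even if you never log bodies, this is exactly the kind of per-app usage data the "research app" was after.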
It is "root" in the sense of "root certificate"[0] which allows, potentially, the ability to MITM essentially any TLS connection transparently to the user.
Any non-pinned connection, since some apps employ certificate pinning on iOS and can't be intercepted without modifying their application binary. This is uncommon, though.
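As a rough sketch of what pinning amounts to (an assumed illustration, not any specific app's code): instead of asking only "does this chain up to a trusted CA?", the app compares the certificate it receives against a digest shipped in its binary, so a certificate minted by a locally installed root CA fails the check even though the OS trusts it:

```python
import hashlib
import socket
import ssl


def is_pinned(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    # Compare the raw DER certificate against the digest baked into the
    # app. A MITM cert signed by an injected root CA hashes differently,
    # so it fails here regardless of what the system trust store says.
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex


def connect_pinned(host: str, pinned_sha256_hex: str, port: int = 443) -> bytes:
    # Normal TLS handshake first (chain validation), then the pin
    # check layered on top of it.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            if not is_pinned(der, pinned_sha256_hex):
                raise ssl.SSLError("server certificate does not match pin")
            return der
```

(Real-world pinning usually pins the public key (SPKI) rather than the whole leaf certificate, so the pin survives certificate renewal.)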
Agreed, I thought this must be Android, then I saw Apple in the article and got really confused. I understood later when I got to the root certificate part, but I've installed a number of those and would never call it "root access". This just allows them to MitM all traffic, even TLS.
Ok, we changed the title to use representative language from the article, which is hopefully more accurate. (Submitted title was "Facebook pays teens to download research app with root access outside App Store".)
Indeed, for a second I thought Facebook had figured out a jailbreak and were paying people to try it out. That would certainly be very interesting news (and provoke even more divisiveness in the comments.)
Fortunately, the title has been edited now to remove "root access".
Please don't post dross comments here, regardless of how angry you are about Facebook or anything else. It lowers the signal/noise ratio and we're hoping for better than that here.
I'm not happy with Facebook or Zuckerberg either, but that message is so old that it's not even relevant anymore. Do I trust Facebook or Mark? No. Does it have anything at all to do with that cringy text? No.
I don't know about you but when thinking of all the crap I said over a decade ago it would be insane if someone tried making it out as if it somehow presently reflected on me in any way.
Guessing that there are plenty of employers who would take the opposite view. There are surely companies who want employees who will do stuff without complaining about a bunch of annoying ethical implications.
That is very evidently not the same. They're decrypting data from all communications.
I'm not coming out and calling you a shill, but you seem to defend Facebook in all of your Facebook-related comments. I'm curious why you seem to think Facebook deserves to exist in its current dystopian form. That you would defend them for decrypting literally all private and unrelated-to-Facebook data of unaware and vulnerable teenagers strains my credulity.
Unbelievable. You keep refusing to address the point of 'decrypting literally all private and unrelated-to-facebook data' and claim it's 'usage statistics'.
S/He's really making FB look bad. I'm guessing they're an employee. Not sure if it's better or worse if they're some kind of social media PR person or just a dev/tech who has drunk the Kool-Aid...
Of course, they control the entire operating system and the app store. They don't need to figure out which apps you're using from the traffic being sent.
As per their policy:
"We may also use personal information for internal purposes such as auditing, data analysis, and research to improve Apple’s products, services, and customer communications."
Even if we accept your premise that OS vendors are inherently malicious actors rootkitting your devices, how does that justify Facebook’s own unethical activity?
It doesn't... what the commenter won't acknowledge is that Facebook doesn't care about what anyone thinks is ethical or unethical... Facebook only cares about the data you're giving them to harvest... this is pretty common with most Facebook employees I've interfaced with... a general lack of/disregard for anyone or anything that disagrees with the Facebook "mission" (paycheck)
Children don’t have fully developed minds, can’t appreciate the consequences of their actions (especially in the long term) and are easy to manipulate with bribes ($20/mo).
This isn’t a bunch of adults deciding to sell their privacy. It’s children who have no hope of understanding what they’re doing.
To be fair, you could replace the word children with the word adult in your first paragraph and it would all still hold.
Adults might have fully developed brains from a biology perspective, and I recognise this is what you meant, but I believe there is a strong argument to be made that many adults, myself included, are heavily lacking in the development of mind department. Mark Zuckerberg surely is, either that or he’s actually Satan.
I definitely have an undeveloped appreciation of the consequences of my actions, and I’m easily manipulated. I’m rapidly approaching 40 laps around the sun!
My greater point here is that I don’t find your argument for why it’s worse because they’re children particularly compelling. Or, perhaps, it’s insufficient. So I’ll replace it with my own:
Society at large has a long history of, and a cultural and biological evolutionary adaptation, to protect children more strongly than adults, because we are born vulnerable and take a long time to reach sexual reproductive age. We’ve only made it this far because we’re not descended from parents who let their kids stumble in to sabretooth tiger territory. (As an aside, I appreciate that the greatest threat to children’s health and development is their immediate family. But here we are).
The worrying thing is, now the sabretooth tiger is a guy who’s surname translates from the mother tongue, German, to English as candy mountain, Google translate actually says “pile of sugar”, and comes with a family friendly large blue thumbs up symbol. So the threat is difficult to discern.
I’d actually be more worried if I didn’t have a seizure like laughing fit every time I think about the whole scenario. It’s a coping mechanism I guess.
I mean, is this really happening? I wish Bill Hicks was still alive! Aaah, he lives on through those who carry the flame!
But on a more serious note, children don't necessarily understand the consequences of their actions, especially at a technical level. Mind you, plenty of adults don't either, but their acceptance of the terms and conditions of something or other is contingent on their being responsible enough to accept any negative consequences that may come; children are not in that position.
A culture that doesn’t have a strong child-protective drive, and a strong drive toward kindness and treading lightly, is likely to disintegrate and eventually destroy each other and the world.
As evidence I present: the current state of affairs!
via https://twitter.com/chronic/status/1090394419902197761
(1) http://www.paulgraham.com/submarine.html