> "Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple," Apple says. "Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”
I previously posted this development as a story, but it got flagged as a dupe. It seems like this is the more truthful angle, since Apple seems to be the real agent that caused this shutdown.
I think I found it? Was it called Onavo on Android? https://play.google.com/store/apps/details?id=com.onavo.spac...
It is only available via referral.
We collect information such as:
* The apps installed on your phone
* Time you spend using apps
* Mobile and Wi-Fi data you use per app
* The websites you visit
* Your country, device and network type
We use this data to:
* Improve and operate the app
* Analyze apps usage
* Help improve Facebook Products
* Build better experiences for our community
Reasons to the contrary (i.e., Apple could not have known): my premise is incorrect; the metadata is insufficient; Facebook's apps, or in-house apps in general, are much more widely used than presumed and often deployed from public-facing servers, resulting in a location distribution too similar to the actual US population distribution; etc.
I would hardly expect Apple to be scrutinizing the install patterns of an enterprise app. Those resources would be better spent improving the App Store review process and TestFlight.
I have two objections to that. First, they're almost certainly not making that deal from an informed position. Security and privacy are very complicated matters, and most people won't even realize to ask questions about data retention, aggregation, sharing, differential privacy, etc. Instead, they'll substitute in an easier more salient question (https://en.wikipedia.org/wiki/Attribute_substitution), like "Is $20 a month something I want?"
The second issue is that MITMing all network traffic on a phone will necessarily scoop up the user's credentials, as well as private messages and metadata from that user's friends and family. It's naive to think about privacy as something an individual can accomplish alone. When you sell data from your networked social device (i.e., your phone), you're also selling out all your friends and family. Given all that really hot data, what incentives are there for the data collector to act responsibly and protect it? The reality is pretty much none.
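To make the MITM point concrete for the non-technical reader: once a device routes its traffic through the operator's VPN and trusts their root certificate, TLS terminates at the operator's proxy, so request bodies (including login forms) are visible there in plaintext. A minimal sketch, with a hypothetical login POST body standing in for real intercepted traffic:

```python
from urllib.parse import parse_qs

# Hypothetical login POST body as an intercepting proxy would see it
# after TLS termination -- already decrypted, credentials included.
decrypted_body = "username=alice&password=hunter2&remember=1"

fields = parse_qs(decrypted_body)
print(fields["username"][0], fields["password"][0])  # alice hunter2
```

The same goes for incoming messages: the plaintext the proxy sees includes content written by people who never agreed to anything.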
After spending an hour reading similar comments to yours, I have to ask this question.
How informed should the user be? What qualifies as an informed user? This is getting into some dangerous territory, because it implies some sort of contract literacy.
>The second issue is that MITMing all network traffic on a phone will necessarily scoop up the user's credentials, as well as private messages and metadata from that user's friends and family.
Do they not already do that with access to Facebook Messenger and Instagram? Why is this not screamed from the top of every hill?
This data collection is also different because they literally say it is for research. (They are protected legally, unfortunately.)
>Given all that really hot data, what incentives are there for the data collector to act responsibly and protect that data?
Hmm, how about getting fined? (The issue is how much they should be fined, and the answer in my opinion should be similar to how the SEC prosecutes insider trading: a fine on top of whatever you made, to strongly discourage you from doing it again, or jail time.)
As a company they want to collect data for whatever reason (a research project, training data for an ML system, creating a new product, refining an existing one or assessing the overall market). So they built software for data collection and are paying people money in exchange. How is this different from Amazon Mechanical Turk or any crowdsourcing platform which pays people for data?
I admit I haven't seen what the contract looks like, but are people suggesting that the end user still does not understand that they are providing user data to be used by Facebook in exchange for money, or has no power to decline such an offer?
But it's much easier for the press to run clickbaity titles like "Facebook spies on teens! Again!!" and many folks on HN quickly jump on the bandwagon.
That said, the part of this backlash I sympathize with is if someone hates benefiting FB or having any of their data in FB's hands, but then has a friend who uses this service and uploads all their collected chat conversations to FB.
and no, I don't work for them.
> Apple says. "Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”
It sucks huge amounts of power, and I assume the Facebook app is collecting all sorts of nefarious metadata off phones at no benefit to the user. Fine-grained controls would allow users to prevent Facebook from accessing their information without the user's explicit choice, which would be a huge improvement.
There should be a big "Facebook wants to read and analyze ALL of your contacts. OK or Cancel" prompt before each read, or you should have to explicitly opt in to "Allow Facebook to read and analyze all of your SMS messages at any time."
Apple's revenue last quarter was a "disappointing" $60-some billion, based on devices that have a multitude of functions beyond Facebook. Not having Facebook available on the device would probably decrease their value somewhat, but with Facebook getting tons of bad press, it could work to Apple's and alternative social networks' long-term benefit by further cementing Apple as the protector of your privacy, and Apple-blessed social networks as better than that trash Android users run on their comparatively open platform. Users already don't seem to value openness much and place higher stock in privacy and security.
Facebook's, in the same time frame, was around $16 billion. Facebook depends heavily on access to mobile platforms to reach users. Apple is only 10-ish percent of the global market but at least around a third of the US market, which Facebook needs. As people are unlikely to chuck their $1000 phones in the trash before 1-2 years are up, it seems likely that most users would keep their Facebook account to stay in contact with friends and family on Facebook, but come to rely on other chat apps on their phone and may ultimately use those apps instead of Facebook, on and off their phones.
From Facebook's perspective, its users don't immediately drop off, but engagement can drop through the floor for that segment, while offering substantial help to competitors that may in the long run turn Facebook into MySpace 2.0. Apple losing Facebook would be a negative. Facebook losing Apple would be an existential threat. If removed from the Play Store as well, it would basically be dead in the water. This would probably never happen, because if Apple and/or Google threatened to do so, Facebook would basically have to give them its left arm and throw in a leg to secure its continued existence. Its bargaining position with either is terrible.
I think both should work together to force facebook to do a better job of protecting its users privacy.
As an aside, I think that in the longer run it could probably consider bypassing the respective app stores entirely and operating purely as a website, but I don't think that's a likely short-term prospect.
Apple has $245 billion cash on hand. That's enough to build their own feature equivalent social network and give $20 to anyone who signs up.
FB only stops doing things that are obviously morally reprehensible when it gets caught or called out in the media. It never proactively does the right thing. :-(
Also, Google updated Android's certificate-trust behavior to only trust built-in roots by default.
https://android-developers.googleblog.com/2016/07/changes-to... might explain why "Project Atlas" is only available for Android devices running Marshmallow and earlier (they can't snoop encrypted app traffic on later versions)
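For reference, the change linked above means apps targeting Android 7.0+ ignore user-installed CAs unless the app itself opts back in via its network security config. A sketch of the documented `res/xml/network_security_config.xml` an app would have to ship to trust user-added roots again (which Facebook obviously can't inject into other developers' apps):

```xml
<!-- Opt back in to user-added CAs. Without this file, apps targeting
     API 24+ trust only the system roots, so a user-installed root
     certificate cannot be used to intercept their TLS traffic. -->
<network-security-config>
    <base-config>
        <trust-anchors>
            <certificates src="system" />
            <certificates src="user" />
        </trust-anchors>
    </base-config>
</network-security-config>
```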
Everyone pin your certificates. If this was standard practice none of it ever could have happened.
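A minimal sketch of what pinning looks like in practice (the host, fingerprint value, and function names below are placeholders, not a real deployment): instead of accepting any certificate that chains to a trusted root, the client also checks the presented certificate's SHA-256 fingerprint against one shipped with the app, so a MITM proxy's forged certificate fails even when its root has been installed on the device.

```python
import hashlib
import socket
import ssl

# Hypothetical pinned fingerprint, baked into the app at build time.
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 hex digest of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    ctx = ssl.create_default_context()  # normal chain validation still runs
    sock = socket.create_connection((host, port))
    tls = ctx.wrap_socket(sock, server_hostname=host)
    presented = tls.getpeercert(binary_form=True)  # DER bytes of the leaf cert
    if fingerprint(presented) != PINNED_SHA256:
        tls.close()
        raise ssl.SSLError("certificate fingerprint mismatch: possible MITM")
    return tls
```

Real deployments typically pin the public key (SPKI hash) rather than the whole certificate, and ship backup pins, so routine certificate rotation doesn't brick the app.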
He turned down our offer, taking a job building a data analytics platform at Facebook instead. "Oh, it was just too exciting tech not to work on."
I mean consider this scenario: A, B, C, D, E are in a meeting to discuss a new project that A has planned. D and E like the idea. B is not committed either way. C says they can't see the project being a good thing but uses the magic words "disagree and commit". How often does it cause A to go back and say ok maybe my idea was not good?
The solution to these problems is oversight from security, compliance, and privacy on all systems dealing with consumer information, and having privacy education for all employees on regular basis. GDPR is a step in that direction.
Come on guys, we just want to build a better ad, what could possibly go wrong!
That's another interesting move.
Ouch. Yes that's really bad, even if it wasn't a violation of Apple's terms of service (it was, apparently).
Wonder what it will reappear as next?
"We totally apologize for making this mistake, which is totally on me and we promise to clean up our act and behave better in the future and we're sorry if you feel offended"
apology by Mr. Zuckerberg a few months later.
(inspired by https://knowyourmeme.com/memes/milkshake-duck)
The Apple gatekeeping here is terrible.
Or maybe Apple took it down because - as you said - it violated the store guidelines. Or Apple just encouraged them to do it by themselves (for a better PR). So effectively, they were forced.
The rules state that enterprise apps must only be in the hands of users who are under the supervision of the company which operates the Enterprise certificate. It's likely that Apple doesn't consider the people Facebook has co-opted into this program to qualify.
There is absolutely no information about the timing. Actually, according to the content, it is again Facebook and not Apple who removed the app.
1) Facebook said to the media that they’d stop distributing it to iOS users.
2) Apple revoked Facebook’s certificates from the enterprise signing program.
I have previously worked with developers in start-ups and was amazed to hear some of their backgrounds. One previously worked for a company that bundled spyware with freeware MSI products. Another worked for an airline agency, where they advertised a fake discount on tickets when the price was in fact higher than market. Needless to say, their actions outside the work environment matched their actions inside.
A $200k/year graduate starting salary is enough to get a lot of people to set their values to one side for a while.
Personally, this doesn't really go against my morals or values. They installed the app. No one forced them to. Not everyone has the same set of morals and values you do. If it's not illegal and no one is harmed, I don't care so much.
I can't prove a negative. So you would have to prove the positive, what harm came from this app?
Would you say that the current political climate has been harmful? I would. I would not only say that, but I would also say that it's come about specifically because of data collection practices which have enabled data analysis of entire populations in a way which has never been seen before.
While analysis of populations is not inherently bad by itself, the outcome certainly has been, particularly because the abilities and effects are so new that there's little or no law governing people towards positive effects for society as a whole. Without those laws, the effects become geared toward benefiting people at the top either directly (company CEOs) or indirectly (capabilities sold to those who can afford them).
Seriously, look at the state of people's finances. That's the cause. Look at the amount of money the rich have compared to those who voted for Trump, Brexit, etc. Look at the quality of life. This has been coming for a long, long time. Blaming it on Facebook and Twitter is not very insightful about the state of affairs in the world today.
My understanding is the entire Cambridge Analytica thing is about privacy and not actual harm from the Facebook aspect.
At scale, such behaviour cannot possibly be good for democracy.
But it seems very much like there is an attitude that working for Facebook or Google is immoral. That I don't believe. Especially when there are companies selling software to secretly track phones to countries like Iran. That's on the border of immoral.
Then why do you assert it as being truthful?
The history of FB's data collection practices is replete with "harm".
And prove the "harm" in its data collection practices. Since you've been very specific in data collection, I will expect your response to be about data collection.
It’s an ongoing source of debate and confusion as to what Software ‘Engineer’ means.
Someone likely will do nearly every unethical thing, but that doesn't mean it is right for anyone to do it.
It would be too much to expect engineers to somehow be angels without the threat of punishment when even doctors need these systems.
But on the other hand, imagine if you heard about this agile thing but couldn't legally apply it because waterfall is mandated, and if you applied agile you might be stripped of your license to practice software engineering.
This isn't a very meaningful statement as people widely disagree about what the ethical action is.
Facebook in general seems to have good intentions and terrible awareness of what is crossing a line. They really wanted their platform to be the center of everyone’s lives and now appear completely unprepared for the consequences of pushing that agenda so hard.
-- Upton Sinclair (https://en.wikipedia.org/wiki/Upton_Sinclair)
These are good people. They just don’t care.
I think these two statements contradict each other.
Like most things, these things are points along a spectrum, not absolutes. Even that is probably too simplistic - points in n dimensional space I guess :)
It's a given assumption that anyone who once worked for a shady company is also shady?
There's a reason that these companies do what they do. Fraud, breaking the law, and dark patterns are way easier ways to make money than doing so ethically and legally. Companies either die a hero or live long enough to become a villain.
I'm exaggerating a bit, but realistically the list of ethical tech companies is a lot shorter than the list of unethical ones.
There are levels and degrees of bad behavior.
Facebook is rotten from the head down.
The data collection gives power to those who hold the data. Having all the data on someone gives absolute power. It can mean looking at intimate data to directly manipulate or dominate a person, or applying data science to an incredibly detailed corpus of data, and in the future learning things about individuals and groups that we can't even fathom.
> literally killing someone
You’ve presumably not read about Myanmar or FB's involvement in organising genocide?
It mostly bothers me because it seems like a great deal of the recent hate towards FB is because people blame them for Trump. If Clinton had won, I don't think people would care.
That said, yes I think FB should do a better job of "know your customer" when selling ads. Obviously they shouldn't sell ads to Russian intelligence. But it doesn't bother me all that much. For one, I don't think it had that big of an impact. For two, it's a dangerous path to go down when you start questioning if your political opponents have the right justifications for their votes.
>You’ve presumably not read about Myanmar or FB's involvement in organising genocide?
Which part of that involved a FB employee going out and literally killing someone?
In relation to FB, do you see any problem with what happened in Myanmar? They allowed the organising of a genocide.
If FB hires these people, is there a difference?
Do you think a random admin assistant knows more than a techy teenager about the dangers of data privacy? Or is she just trying to scrape by for another week of rent?
This sounds like alarm from the rich and well to do about an ‘immoral’ way for the poor to make money.
No harm done...
Facebook makes it exceedingly easy for people to hate them, while technically doing nothing wrong. Well sometimes they break the law, but then Zuckerberg just says he's sorry and we all move on.
It's getting tiresome to watch our industry behave recklessly in the pursuit of ad revenue and the push to collect more and more data about the unsuspecting public.
I enjoy my work in IT, but companies like Facebook, and increasingly Google and Apple, the entire IoT business, and a ton of startups are pushing me towards being less connected, less interested in new technology. It seems like a huge chunk of the industry is misguided. I'm sick and tired of it.
The ad-supported business model really has corrupted tech to the core.
In my experience, a large majority of non-tech people do not care much about their data being collected and would happily accept a monetary reward for things like their location history, usage stats, etc. Assume the data collector is using this information to show you relevant ads. I wonder why it would be so much worse to see a relevant ad than an irrelevant one? It's not as if you are going to stop seeing ads anytime soon.
So the question is: if the data producer is happy to sell the information and the data collector is using it to enhance the user experience (i.e., no evil intent), would you still consider this bad or immoral?
Did it explain the consequences of installing a root certificate, though?
I guess the first question is, if I do something illegal, but either avoid being captured or manage to otherwise avoid prosecution by using money or influence, have I done something wrong, legally speaking?
Because there is an argument to be made that making money off of children's consent to contracts is legally wrong, but because such practices are widespread among those with enough political power to affect how laws are enforced, the laws are not enforced to protect children. That we aren't correctly enforcing laws that protect children from being taken advantage of via consent that can't legally exist does not make the action legally right; or if it does, then we are effectively invoking the notion that "it isn't illegal if you don't get caught".
Google is particularly fascinating: their employees protest when their parent company wants to be a supplier to the US military, but apparently they have no moral objection to building detailed profiles on users (and non-users) in order to sell them more junk that they don't need and can't afford.
The whole data collection thing is so abstract that the same people who don't want their employer to kill people with drones can't see the immorality of invading people's privacy. If that's the case, then how can we expect the layman to understand or care?
I am not accusing them of "nefarious" political motivations, I respect their right to push their own politics. I wish they were more transparent, though.
True. But Facebook has managed—by demonstrating a repeated recklessness, unwillingness to reform, and immorality that seems fundamental to its culture—to energise a vocal minority in a way no other tech company has. This vocal minority, moreover, is wealthy, politically connected, and bipartisan.
This is already resulting in costs and reduced strategic flexibility for Facebook’s senior leadership. I expect it to turn into an existential threat for the company, as it’s currently organised, in the coming years.
How many people would be comfortable if you laid out the full spectrum of what is possible with the data that is collected, and how said data is distributed and sold?
It's frankly a massively dishonest claim that people "don't care so it's all fine".
My experience is that most people are surprised and mildly horrified that Facebook has most of their browsing history.
All those new accounts accusing critics of hating Facebook are really rather ineffective.
less than 8 hours after the original post hits #1 on HN.
Could Google Play?
But Apple surely could remove Fb from the App Store.
I am aware that this normally isn't a welcome discussion topic, but if behaviour related to an article about dirty behaviour smells dirty, it feels a bit relevant.
(in contrast, the similarly highly-upvoted FaceTime bug thread seemed to stay up longer, but no official fix had been made in the ~18 hours since it was first discussed)
Facebook will shut down its c̶o̶n̶t̶r̶o̶v̶e̶r̶s̶i̶a̶l̶ dastardly market research app for iOS
"Your honor, I swear! He clicked on the button saying he was over 18!!! How was my porn site supposed to know he was lying?"
If Facebook wanted to do something useful, it could devote some of that $400 billion empire into identity verification and ways to prevent deceptive practices online. Something that goes beyond the Stone Age practices like SSN and Captchas that we rely on today.
Now the new controversy is data. Don't ever sell your data! Don't give up your data! But teens are doing it widely and profusely.
At some point you just have to accept that new people will be born and will have new ideas and just won't give a fuck. As a business you can either sit on the sidelines and watch or capitalize on it.
Facebook will lead the way to allow for smaller players to get away with doing this as well.
That's not how any of this works. It's hard to 'move fast and break things' when a company like Facebook has used all the loopholes and gotten caught.
Small players will never be able to replicate Facebook or their scummy tactics because at this point Facebook wants regulations. Regulations will ensure Facebook can and will continue to operate (can pay any fine) while the competition is hamstrung. If your competition is hamstrung it makes it easier to buy em up or wipe em out.
Just to come full circle, Facebook purchased Onavo so they could spy on users and figure out which apps they were using and how often. Facebook then used this data to decide which companies to buy up.
I don't believe the behavior you're describing actually falls along generational lines, though, rather than technical competence or awareness. It isn't only "teens" who use social media, nor is it only "old people" who are concerned about privacy. In fact, younger generations are leaving Facebook and social media because of privacy concerns and the negative effect it has on their lives.
Your arguments here are stereotypical ageism and lack enough nuance to be convincing.
* Apple is probably the least guilty here, however as long as their OS remains closed, we don't actually know how much personal data they are harvesting.
This is quite a step up from sending home some telemetry data.
Facebook's market research app isn't "redirecting ALL INTERNET TRAFFIC through their servers" either. If you read the article it states that the app monitors phone and web activity and sends it back to Facebook. This is not really any different from sending home telemetry data. Google, Microsoft and Apple all send home encrypted data, most of which is not verifiable using Wireshark.
I'm no fan of Facebook, far from it in fact, but at least this is an opt-in service and users are being compensated for sacrificing personal data. The same can't be said for the other three companies.
What's next? People will start harassing students/researchers that do paid studies?
Pushing the app distribution thing aside, being aboveboard in every other respect is still insufficient justification for ethically questionable processes.
Facebook has a history of unscrupulous untrustworthiness which should not be overlooked when examining the implications of the scheme, particularly the requirement to install a root certificate. To ignore the context of the polemic, to pretend Facebook is just another company rather than one of the largest collectors of personal, private information on the planet, multiple times caught invading peoples' privacy through less than honourable means, is foolhardy at best and dangerous at worst.
Will it blow up in your face? Perhaps. I could not care less. It's your problem. You didn't know what you were being asked for? Again - it's your problem.
People agreed to do that on their own, they got paid for it, and, most likely, they don't care if FB knows what kind of porn they browse.
All the stuff about ethics and "honourable means" is irrelevant in this argument. Is war ethical? Is spying honourable? Depends on whom you ask.
>requirement to install a root certificate
Requirement? You can just tell them to f*ck off.
In any case, I could not care less about this, but what annoys me is the people who pretend to be super-nannies that are gonna save the world by telling others what they should do. Throughout history, this has never worked.
That isn't the point. The point is that Facebook preyed on technical illiteracy and a general and widespread lack of understanding of the implications of participating in the program. Effectively, the majority of participants were tricked. The age range for participants also included people who were not of age, and therefore not legally responsible for their actions. Facebook must accept that responsibility.
The point is that Facebook flouted Apple's guidelines for the distribution of apps outside the App Store, showing a blatant disregard for the protections put in place to protect consumers from bad actors. Facebook has positioned itself as a bad actor through their actions, not only in the questionable collection of data the implications of which the users will likely be unaware, but also in how such a program was distributed: in direct violation of the protections offered by the App Store.
The point is not about the choices made by the end users, but rather the unscrupulousness of a big company that knows better — but has done this before, far too many times.
Finally, the very nature of online interactions in the modern era means that people aren't just signing away their own privacy but also, to a lesser extent, the privacy of those with whom they interact. Facebook is perfectly aware of this potentiality, but users are not and, on the whole, will never be because it isn't their job to understand the technical dimensions of online communication. A big company like Facebook, however, does know and should have behaved accordingly.
> [...] is irrelevant in this argument. Is war ethical? Is spying honourable?
What do these two examples have to do with the actual circumstance? "Market research" is not war, and it certainly should never be construed as synonymous with spying. Do not construct a strawman against which to argue, it demeans your argument.
> Requirement? You can just tell them to f*ck off.
You could, but then you would not be adequately participating in the research. You would not be using the VPN as described. You would not earn the $20. There would be no discussion.
Of course, you're failing to account for the fact that none of the participants will have had the privacy and security implications of the root certificate explained to them in a way that made sense to them. They'll have simply followed instructions to get their money.
I do not think that people should ever be blamed for being deceived as to the severity of their actions in situations such as this; a big company like Facebook does not escape scrutiny here. Clearly you believe differently, although the downvotes will tell you how well-received such a laissez-faire attitude to other peoples' private lives and preying on their technical ignorance is seen, so I shan't bother to comment any further.
> what annoys me is the people that pretend to be super-nannies that gonna save the world by telling what the others should do
Thankfully, that's not the situation at all, and I fear you're simply projecting some negative feelings onto this article in order to justify having an unjustifiable gripe.
Here's what's happening:
- Facebook previously had a VPN service that it advertised as being for market research purposes. It was removed from the App Store.
- Facebook then started using Apple's alternative app distribution method intended only for use in enterprise situations, not for the general public.
- They were found out.
- Facebook voluntarily ended the program for iOS users.
- Apple revoked Facebook's certificate as punishment for flouting the rules.
Who is telling whom to do what, here? Facebook did many things wrong, were found out, and were punished appropriately. The technical details of their actions were analysed and found to be vastly overstepping their bounds, yet in step with their continual and repetitive breaches of personal privacy.
There's nothing more to it than that, so put the strawman back on the farm where it belongs.
I'm very interested to see what Apple's response is going to be. I'd not be shocked (in fact I'd be delighted) to see them penalize FB in some way, perhaps suspend their App Store account or something.
OK, let’s imagine that you have a close friend or family member with some confidential issue - maybe an illness, maybe debt, maybe they are in the closet. Occasionally they message you, on old-fashioned SMS or email, mentioning something about it.
How many dollars is a reasonable trade to tell a data collection agency everything you know so they can add it to their file on your friend/relative?
I probably wouldn't, unless it was in excess of maybe $1000/month. And even then I'd probably just get a new phone. But people should have the right to sign contracts, even if they seem exploitative, as long as they are aware of what they are agreeing to.
The main problem it seems here is that a lot of the people were underage.
Even if you stick with apps like Telegram or Wire (my choice), you have to keep in mind that your phone might have a keylogger on it (looking at you, Xiaomi and Huawei).
Well, there’s the rub, isn’t it? I don’t think most people would consciously decide to rat out their friends’ secrets for $20 - or indeed for any price. But somehow, it’s happening.