Hacker News
Facebook will shut down its controversial market research app for iOS (theverge.com)
261 points by flylib 17 days ago | 218 comments



Previous installment of the discussion here: https://news.ycombinator.com/item?id=19031055.


Facebook has had its enterprise certificates revoked by Apple [1]

https://seekingalpha.com/news/3427520-apple-banning-facebook...

> "Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple," Apple says. "Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”

[1] I previously posted this development as a story, but it got flagged as a dupe. It seems like this is the more truthful angle, since Apple seems to be the real agent that caused this shutdown.


"You are not firing me, I'm resigning!"


It is still available on Android.


Every day I am happier having switched to iPhone after being an Android fanboy for ages.


You could of course just continue not to install obvious spyware just like you presumably do on your desktop computer.


But that won't justify their newfound tech hate (this time on Google) ...


What is it called? I don't see it on the play store.

I think I found it? Was it called onavo on Android? https://play.google.com/store/apps/details?id=com.onavo.spac...



Scary notes for Facebook's Onavo app, and it seems you don't even get paid for it on Android?

We collect information such as:

* The apps installed on your phone

* Time you spend using apps

* Mobile and Wi-Fi data you use per app

* The websites you visit

* Your country, device and network type

We use this data to:

* Improve and operate the app

* Analyze apps usage

* Help improve Facebook Products

* Build better experiences for our community


I find it hard to believe that Apple didn't know. I would expect any device with an in-house app installed to be reporting metadata about said app back to Apple. Comparing the number of installs of a particular app, plus the distribution of IP addresses/locations where this app was installed, vs. that of a normal in-house Facebook app would likely show a discrepancy.

Reasons to the contrary (i.e., Apple could not have known): my premise is incorrect; the metadata is insufficient; in-house apps in general are much more widely used than presumed and often deployed from public-facing servers, resulting in a location distribution too similar to the actual US population distribution; etc.


Facebook has offices all over the world and employees testing with devices on home networks all over the world (not to mention developers giving builds to family and friends, potentially).

I would hardly expect Apple to be scrutinizing the install patterns of an enterprise app. Those resources would be better spent improving the App Store review process and TestFlight.


I've been reading a lot of comments about this, and something that always comes up is the question "why shouldn't people be able to sell their data?"

I have two objections to that. First, they're almost certainly not making that deal from an informed position. Security and privacy are very complicated matters, and most people won't even realize to ask questions about data retention, aggregation, sharing, differential privacy, etc. Instead, they'll substitute in an easier more salient question (https://en.wikipedia.org/wiki/Attribute_substitution), like "Is $20 a month something I want?"

The second issue is that MITMing all network traffic on a phone will necessarily scoop up the user's credentials, as well as private messages and metadata from that user's friends and family. It's naive to think of privacy as something an individual can accomplish alone. When you sell data from your networked social device (i.e., your phone), you're also selling out all your friends and family. Given all that really hot data, what incentive is there for the data collector to act responsibly and protect it? The reality is pretty much none.


"First, they're almost certainly not making that deal from an informed position"

After spending an hour reading similar comments to yours, I have to ask this question.

How informed should the user be? What qualifies as an informed user? This is getting into dangerous territory because it implies some sort of contract literacy.

>The second issue is that MITMing all network traffic on a phone will necessary scoop up the user's credentials, as well as private messages and metadata from that user's friends and family.

Do they not already do that with access to Facebook Messenger and Instagram? Why is this not screamed from the top of every hill?

This data collection is also different because they literally say it is for research. (They are legally protected, unfortunately.)

>Given all that really hot data, what incentives are there for the data collector to act responsibly and protect that data?

Hmm, how about getting fined? The issue is how much they should be fined, and the answer in my opinion should be similar to how the SEC prosecutes insider trading: a fine on top of whatever you made, to strongly discourage you from doing it again, or jail.


I agree. I've seen Facebook doing shady things but I don't think this is one of them.

As a company they want to collect data for whatever reason (a research project, training data for an ML system, creating a new product, refining an existing one or assessing the overall market). So they built software for data collection and are paying people money in exchange. How is this different from Amazon Mechanical Turk or any crowdsourcing platform which pays people for data?

I admit I haven't seen what the contract looks like, but are people suggesting that the end user still does not understand that they are providing user data to be used by Facebook in exchange for money, or has no power to decline such an offer?

But it's much easier for the press to run click-baity titles like "Facebook spies on teens! Again!!", and many folks on HN quickly jump on the bandwagon.

That said, the part of this backlash I sympathize with is someone who hates benefiting FB and hates having any of their data in its hands, but then has a friend who uses this service and uploads all their collected chat conversations to FB.

and no, I don't work for them.


What is "dangerous territory" about "contract literacy" when the typical Terms & Conditions for basically any app/service/website are dozens of pages of legalese?


When one Facebook user messages another, both users have consented to trust Facebook, for good or ill. The question is actually much more interesting when you consider a federated network like email.


It is the million-dollar question, because while wiretap laws vary by state (one-party consent, two-party consent), there is nothing like that for data sharing.


If I tried something like this I’d have my developer account totally revoked and all my apps pulled; but I can’t imagine that happening to Facebook. It makes sense but it’s interesting that there’s a systemic de-risking of the way big players operate while simultaneously being in a position to do the most damage.


They've had their enterprise signing certificate revoked.

https://seekingalpha.com/news/3427520-apple-banning-facebook...

> Apple says. "Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”


Does that mean FB's mobile device management for stuff like VPNs is now broken for their employees?


From a certain size on, they need each other. Apple can't afford to lose Facebook, and Facebook can't afford to lose Apple.


God, I wish Apple had the balls to suspend the developer account and withdraw the Facebook app from the App Store. It's infuriating that Facebook will most likely again get away with a slap on the wrist. How will they make the next morally relevant decision?


They should implement permission controls so users can choose what functionality the facebook app can access.

It sucks huge amounts of power, and I assume the Facebook app is collecting all sorts of nefarious metadata off phones at no benefit to the user. Fine-grained controls would allow users to prevent Facebook from accessing their information without their explicit choice, which would be a huge improvement.

There should be a big "Facebook wants to read and analyze ALL of your contacts. OK or Cancel" prompt before each read, or you should have to explicitly grant "Allow Facebook to read and analyze all of your SMS messages at any time".


They've taken the sensible step of revoking the Enterprise account, which is usually separate to the AppStore account.


Is that really the case, though? Let's play through this virtual divorce.

Apple's revenue last quarter was a disappointing, what, 60-some billion dollars, based on devices that have a multitude of functions beyond Facebook. Not having Facebook available on the device would probably decrease their value somewhat, but with Facebook getting tons of bad press it could work to Apple's and alternative social networks' long-term benefit, further cementing Apple as the protector of your privacy and Apple-blessed social networks as better than that trash Android users run on their comparatively open platform. Users already don't seem to value openness much and place higher stock in privacy and security.

Facebook's was, in the same time frame, around 16 billion. Facebook depends heavily on access to mobile platforms to reach users. Apple is only 10ish % of the global market but at least around 1/3 of the US market, which Facebook needs. As people are unlikely to chuck their $1000 phones in the trash before 1-2 years are up, it seems likely that most users would keep their Facebook account to stay in contact with friends/family on Facebook, but come to rely on other chat apps on their phone and may ultimately use those apps instead of Facebook, on and off their phones.

From Facebook's perspective, its users don't immediately drop, but engagement can drop through the floor for that segment, while offering substantial help to competitors that may in the long run turn Facebook into MySpace 2.0. Apple losing Facebook would be a negative. Facebook losing Apple would be an existential threat. If removed from the Play Store as well, it would basically be dead in the water. This would probably never happen, because if Apple and/or Google threatened to do so, Facebook would basically have to give them its left arm and throw in a leg to secure its continued existence. Its bargaining position with either is terrible.

I think both should work together to force Facebook to do a better job of protecting its users' privacy.

As an aside, I think that in the longer run Facebook could probably consider bypassing the respective app stores to work entirely as a website, but I don't think that's a likely short-term prospect.


You'd be surprised.

Apple has $245 billion cash on hand. That's enough to build their own feature equivalent social network and give $20 to anyone who signs up.


Hope it works better than Ping and Google Plus.


Building a social network is difficult even with almost infinite money. Apple has failed miserably. So has Google.


Knowing Apple, it would be Apple-device-only and only let you communicate with other yuppies.


... so they allow themselves to abuse each other? Maybe enterprises should have % penalties built into their contracts here to effect the appropriate behavior if the normal rules don’t (or can’t) apply.


They don’t abuse each other but their users.


FB is only "shutting down" its iOS spyware due to bad press, not due to a sudden realisation that spying on children who are incapable of understanding the long-term consequences of the privacy invasion is fundamentally wrong/evil.

FB only stops doing things that are obviously morally reprehensible when they get caught or called out in the media. They never proactively do the right thing. :-(


It's not even that: they are not stopping the identical Android app, only the iOS one. From the media/public standpoint, the programs are nearly identical, so this is clearly a case of Apple ordering them to stop, and/or they know they are hosed for giving Apple the finger and are trying to pull the plug before Apple revokes their entire company's enterprise certificate, which would break any internal apps (the intended use of enterprise certs) that they may be using. So it's not even the bad press, just them fearing angering a company they depend on to do their business.


According to articles I saw in the last few hours, their "certificates" have been pulled. So this has likely already happened.

Also, Google updated Android's certificate trust behavior to only trust built-in roots by default. https://android-developers.googleblog.com/2016/07/changes-to... might explain why "Project Atlas" is only available for Android devices on Marshmallow and earlier (they can't snoop encrypted app traffic on later versions).

https://www.betabound.com/referral-instructions-for-project-...
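For context (based on the linked Android developer post, not on anything in this thread): starting with Android 7.0 Nougat, apps targeting API 24+ trust only the system CA store by default, and an app must explicitly opt in to user-installed roots via a network security config. A sketch of that documented format:

```xml
<!-- res/xml/network_security_config.xml -->
<network-security-config>
    <base-config>
        <trust-anchors>
            <!-- Nougat+ default: only pre-installed system CAs are trusted -->
            <certificates src="system" />
            <!-- An app would have to add <certificates src="user" /> here
                 (the pre-Nougat default) for a user-installed root CA,
                 like the one the Research app relied on, to be trusted. -->
        </trust-anchors>
    </base-config>
</network-security-config>
```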


Nice! Glad Apple caused them some massive chaos. I'd forgotten about that roots thing, it's actually really nice albeit a little annoying for debugging/reverse engineering.


Yeah, the API change was back in 2016, somewhat close to the timing of Facebook's deployment of Onavo. The conspiracy theorist in me says Google might have got a tip about such behavior years ahead of the public revelation. SV companies are quite incestuous.

Everyone, pin your certificates. If this were standard practice, none of this could ever have happened.
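On the pinning point, here's a minimal illustrative sketch (the function names and fingerprint-comparison approach are my own, not from the thread) of what client-side certificate pinning amounts to, using only Python's standard library. A MITM proxy that re-signs traffic with a user-installed root presents a different leaf certificate, so a pinned client refuses the connection even when ordinary CA validation would pass:

```python
import hashlib
import socket
import ssl


def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint (hex) of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()


def connect_pinned(host: str, port: int, pinned_sha256: str) -> str:
    """Open a TLS connection, but refuse it unless the server's leaf
    certificate matches a fingerprint obtained out of band."""
    ctx = ssl.create_default_context()  # normal CA validation still applies
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            if cert_fingerprint(der) != pinned_sha256.lower():
                # An interception proxy re-signing traffic with an
                # installed root CA presents a different leaf cert,
                # so this check fails even though the chain "validates".
                raise ssl.SSLCertVerificationError(
                    "pinned fingerprint mismatch: possible interception")
            return tls.version()
```

The catch, as later comments note, is key rotation: rotate your server certificate without shipping the new pin and you lock out your own clients, which is partly why pinning never became universal.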


Not really. Apple banned the app; it clearly violated their developer terms of service, so they couldn't have continued with it anyway: https://www.recode.net/2019/1/30/18203231/apple-says-its-ban...


Apple didn’t just ban the App, they revoked Facebook’s privileges to “side load” their internal apps, so Facebook employees can’t even order lunch or a company shuttle.


Business as usual. They’re like the bad guy in a crappy cartoon that does the same bad stuff every week.


Almost by definition, we're only going to notice the "right things" they've done that they don't get called out on (i.e. have attention drawn to beforehand).


I would love to be a fly-on-the-wall at some of their meetings when someone says “So we are going to do data collection on minors without parental consent now” and all the developers just say “great I’ll code that up straight away”. Like is there no dissent at all? Is there no one at all who says “guys this is not OK”?


There's some powerful dissonance going on. I once interviewed an engineer. We spent a good 15m of the hour-long interview talking about the moral implications of social networking, the potential for powerful technology to cause harm, and what a developer's imperatives were. He was completely on the "side of right" - must use powers for good, data harvesting is bad, Facebook is an immoral business etc.

He turned down our offer and took a job building a data analytics platform at Facebook. "Oh, it was just too exciting tech not to work on."


You dodged a bullet; that guy was just telling you what you wanted to hear, lacking in moral fibre.


I mean, according to the original TechCrunch article pointing out this issue, there was a consent form minors had to get signed.


Click here to pinky-swear you asked your parents.


The BBC are reporting that they signed up as a 14-year-old and no parental consent was sought.


Facebook officially prohibits users younger than 13, mainly because US law has a lot of special requirements for children under that age ( https://en.wikipedia.org/wiki/Children%27s_Online_Privacy_Pr... ). I’m sure the app doesn’t have any special age-based code because if someone has a Facebook account, they’ve already told Facebook they’re 13 or older.



fair enough.


For many devs it is still a dream to work in SV, regardless of the company. They'll do whatever they're asked to as long as they keep living there.


Amazonians reading this: I'd like to know how often it causes someone to rethink their proposal when you use the magic words "disagree and commit". Does it ever happen?

I mean consider this scenario: A, B, C, D, E are in a meeting to discuss a new project that A has planned. D and E like the idea. B is not committed either way. C says they can't see the project being a good thing but uses the magic words "disagree and commit". How often does it cause A to go back and say ok maybe my idea was not good?


In reality it's more like 'agree and submit' to the manager/pm making the proposal. Don't want to do the work because you find it goes against morals or personal beliefs? That's fine, they'll pivot you out of the company real quick.


There is an assumption that developers "knew" how the code being implemented was going to be used. In a big company like FB, as an IC or a team, the context under which code gets developed may be far removed from how it gets used. I can imagine a scenario where someone developed this code for testing the FB app on devices, another engineer had a similar need and morphed it into a different product, etc.

The solution to these problems is oversight from security, compliance, and privacy teams on all systems dealing with consumer information, and privacy education for all employees on a regular basis. GDPR is a step in that direction.


Try reading "Chaos Monkeys" for some insight.


Turns out a lot of people are willing to be unethical for a lot of money. If you can convince people to murder other people and render them down to soap, you can probably convince people to build spyware. You can even pretend that such data will never be used in a prejudicial fashion, say, to deny people employment in the future based on some social score.

Come on guys, we just want to build a better ad, what could possibly go wrong!


> It will apparently continue to be available for Android users.

That's another interesting move.


Just like “Onavo Protect” still is available on the Play Store - https://play.google.com/store/apps/details?id=com.onavo.spac...


I suspect it's breaching some term of the App Store, or is at least in a grey area. So it's a pre-emptive move to avoid an "Apple has removed Facebook app" news headline.


On Android, they can just distribute the APK.


Google really needs to clean the Play Store of all sorts of malware and spyware.


> The Research app requires that users install a custom root certificate, giving Facebook the ability to see users’ private messages, emails, web searches, and browsing activity.

Ouch. Yes that's really bad, even if it wasn't a violation of Apple's terms of service (it was, apparently).


So. Onavo was taken down and repackaged as Facebook Research. And now that has been taken down too.

Wonder what it will reappear as next?


Facebook NotSpywareWePromise


Followed by a

"We totally apologize for making this mistake, which is totally on me, and we promise to clean up our act and behave better in the future, and we're sorry if you feel offended"

apology by Mr. Zuckerberg a few months later.


"We regret to inform you the duck is spyware."

(inspired by https://knowyourmeme.com/memes/milkshake-duck)


no that's the name of their flagship app...


Part of the Facebook app.


Or rather - "Facebook was FORCED to remove its app by Apple".


Where does it say that?


It's still on android. At least this reveals that the move is not about the users.


They probably took it down because as the original app it violated the App Store’s guidelines on data collection. But they have not been forced.

The apple gate keeping here is terrible.


> They probably took it down because as the original app it violated the App Store’s guidelines on data collection.

Or maybe Apple took it down because - as you said - it violated the store guidelines. Or Apple just encouraged them to do it by themselves (for a better PR). So effectively, they were forced.


It wasn’t in the App Store. It was sideloaded from a Facebook website using a profile signed by Facebook’s enterprise developer certificate. Screenshots in various news articles confirm this.


It violated the Enterprise Developer Certificate Terms, so Apple have revoked the Developer certificate.

The rules state that Enterprise apps must only be in the hands of users who are under the supervision of the company which operates the Enterprise certificate. It's likely that Apple doesn't consider the people Facebook has co-opted into this program to qualify.


According to the only information there, Apple DID NOT take it down the last time, which means it was still collecting data before Facebook removed it.

There is absolutely no information about this time. Actually, according to the content, it is again Facebook and not Apple who removed the app.


It might as well be so. They were using an enterprise distribution method which only allows distribution to company employees. This is grounds for account revocation. (Not their App Store account, their enterprise account.)


It might, or it might be just the same as with the other app, where Facebook took it down itself. Just as it says in the actual article...


It doesn’t. The articles all say, in summary:

1) Facebook said to the media that they’d stop distributing it to iOS users.

2) Apple revoked Facebook’s certificates from the enterprise signing program.


If you have a choice, don't work for Facebook.


Sure hope Google will shut down theirs too...lol

https://support.google.com/audiencemeasurement/answer/757381...


"Apple tells TechCrunch that yesterday evening it revoked the Enterprise Certificate that allows Facebook to distribute the Research app without going through the App Store."

https://techcrunch.com/2019/01/30/apple-bans-facebook-vpn/


I wonder if google will do the same for their "install a VPN and app outside the app store to look at what people do" app:

https://support.google.com/audiencemeasurement/answer/757389...


I really hope people remember all the ways Facebook is being scummy for when Zuckerberg decides to enter politics. We don't need more people like him in government.


I don't want this to happen. But I want to see it happen.


Would be nice if people prepared a series of questions about this for the next time Zuck is in the hot seat before Congress. Even if it doesn't break any laws and consent was acquired, this still smells so bad.


I struggle to see how software developers can willingly work on a project that goes against all values and morals!

I have previously worked with developers in start-ups and was amazed to hear some of their backgrounds. One previously worked for a company that bundled spyware with freeware MSI products. Another worked for an airline agency, where they advertised a fake discount on tickets when the price was in fact higher than market. Needless to say, their actions outside the work environment matched their actions inside.


I struggle to see how software developers can willingly work on a project that goes against all values and morals!

A $200k/year graduate starting salary is enough to get a lot of people to set their values to one side for a while.


> I struggle to see how software developers can willingly work on a project that goes against all values and morals!

Personally, this doesn't really go against my morals or values. They installed the app; no one forced them to. Not everyone has the same set of morals and values you do. If it's not illegal and no one is harmed, I don't care so much.


It's the "no one is harmed" part I strongly disagree with. Perhaps they signed up for something and have perfect knowledge of how Facebook will use the information it collects (doubtful), but the rest of us will be impacted, manipulated, and harmed in some way by what Facebook ultimately ends up doing with the data that its guinea pigs willingly gave up.


> It's the "no one is harmed" part I strongly disagree with.

I can't prove a negative, so you would have to prove the positive: what harm came from this app?


> what harm came from this app?

Would you say that the current political climate has been harmful? I would. I would not only say that, but I would also say that it's come about specifically because of data collection practices which have enabled data analysis of entire populations in a way which has never been seen before.

While analysis of populations is not inherently bad by itself, the outcome certainly has been, particularly because the abilities and effects are so new that there's little or no law governing people towards positive effects for society as a whole. Without those laws, the effects become geared toward benefiting people at the top either directly (company CEOs) or indirectly (capabilities sold to those who can afford them).


> Would you say that the current political climate has been harmful? I would. I would not only say that, but I would also say that it's come about specifically because of data collection practices which have enabled data analysis of entire populations in a way which has never been seen before.

Seriously, look at the state of people's finances. That's the cause. Look at the amount of money the rich have compared to those who voted for Trump, Brexit, etc. Look at the quality of life. This has been coming for a long, long time. Blaming it on Facebook and Twitter is not very insightful about the state of affairs in the world today.


You don't think the data collection policies these companies have meaningfully enabled and/or contributed to the wealth gap?


No... That would be banks and governments with decades of lax tax laws. This wasn't created overnight, 2008 just highlighted it.


Isn't Cambridge Analytica enough proof that harm has been done by Facebook's data collection and allowing others to access it? That's the very tip of the iceberg.


What harm came from that exactly? More people voted? Is voting harm? Just because you don't like the way someone voted doesn't make it harmful.

My understanding is the entire Cambridge Analytica thing is about privacy and not actual harm from the Facebook aspect.


It was about having enough data to specifically target people with messages that would influence their vote.

At scale, such behaviour cannot possibly be good for democracy.


For me, everything hinges on the fact that it could be harmful. And that's true, it could be. Lots of things could be harmful. But right now, it's not really provable by anyone that it is. What we need are proper laws to control what companies can do. And in some areas of the world we're getting them. They're not great, but like all things they will improve.

But there seems very much to be an attitude that working for Facebook or Google is immoral. That I don't believe, especially when there are companies selling software to secretly track phones to countries like Iran. That is what borders on immoral.


It was used to figure out which parties were most susceptible to being manipulated with misinformation and outright lies so that they would vote for people who would ultimately harm their own interests. For example, figuring out which groups of liberals could be pushed towards a third-party candidate with propaganda, so that a conservative candidate could win the election and disadvantage those same folks. This is straightforward harm. Even if the election was still technically democratic, it was swayed by manipulation partially funded by our enemies, and by those who aligned themselves with America's enemies, to get a president elected by the minority to harm the interests of the majority in order to enrich a much smaller minority.


>can't prove a negative.

Then why do you assert it as being truthful?

The history of FB's data collection practices is replete with "harm".


I didn't assert that anything was truthful?

And prove the "harm" in its data collection practices. Since you've been very specific in data collection, I will expect your response to be about data collection.


I'm not the original poster, but the harm would logically follow from the USE of that data, now and in the future, not from the process of collection, much like the harm from a long fall comes not from the journey but from the sudden stop at the end.


Voter/election manipulation


I struggle to see why we expect software engineers to be bellwethers of morality and ethics. They are flesh and blood, and as in any other profession, if there's money in something, there will be someone to do it.


We expect all engineers to care, and most people who call themselves engineers take a course in ethics as part of their certification.

It’s an ongoing source of debate and confusion as to what Software ‘Engineer’ means.


Engineers do (like civil, mechanical, electrical) but most software engineers come from a CS background, where there isn’t as much focus on professional ethics.


For one, as professionals and engineers, SEs should absolutely be attuned to and deeply reflective of ethics. That said, we can and should expect everyone to consider ethics in their daily lives and work.

Someone likely will do nearly every unethical thing, but that doesn't mean it is right for anyone to do it.


Correct. There is nothing intrinsically noble about technologists. No Hippocratic oath, no unifying moral code. They’re susceptible to the same pitfalls of professional disregard as anyone. Understanding that will help you temper any unearned trust in software and technology companies.


We should expect everybody to care about the morality and ethics of their actions. That's a pretty low fucking bar IMO. In software, this is one way that manifests.


Other professions are self regulating, if you do unethical things your peers (the AMA or legal boards) will strip you of your license to practice and then you can’t find a job.

It would be too much to expect engineers to somehow be angels without the threat of punishment when even doctors need these systems.

But on the other hand, imagine if you heard about this agile thing but you can’t legally apply it because waterfall is mandated and if you apply agile you might be stripped of your license to practice software engineering.


I am not condoning the unethical practices, but every profession has people who are below the bar you talk about. The reasons why they do it are as varied as the peoples of the world.


>We should expect everybody to care about the morality and ethics of their actions.

This isn't a very meaningful statement as people widely disagree about what the ethical action is.


There was a YC funded company a few years ago that had a similar startup bundling shitware, and pg himself was on here defending them and claiming that users are “choosing” to install the software willingly. It’s astonishing how much one’s decision making framework is tied to their financial incentives. People find a way to rationalize anything if they’re getting paid, and they don’t even realize that they’re doing it.


One of the interesting ideas out there is the evolutionary development of morality: basically, it exists to serve our survival, and historically has changed in the right ways to complement the technology of the time (and the tribes which fail to update their morals die out). So that might just be a feature, not a bug.


Software developers are human beings. This analogy may seem inadequate, but don't you also struggle to see how certain human beings are willing to physically hurt or kill other human beings? This also goes against all values and morales, yet people do it. It's just that your set of values is not the same as other people's. Our world has always been a dark, cruel place, and yet for some reason a lot of western people seem to have forgotten this very apparent fact.


It's almost as if people have different morals and values...


A case like this can be sold to developers in odd ways. Devs typically think that people who download things have way more insight than they actually do, and can assess what putting the certificate on the device means. The team that started the project likely feels that this helps them build a better experience for their users.

Facebook in general seems to have good intentions and terrible awareness of what is crossing a line. They really wanted their platform to be the center of everyone’s lives and now appear completely unprepared for the consequences of pushing that agenda so hard.


"It is difficult to get a man to understand something when his salary depends upon his not understanding it."

-- Upton Sinclair (https://en.wikipedia.org/wiki/Upton_Sinclair)


I know a reasonable number of Facebook employees. Their opinions on the last year range from “don’t care/getting paid” to “everyone does it.”

These are good people. They just don’t care.


> These are good people. They just don’t care.

I think these two statements contradict each other.


Or maybe people are a lot more complicated than simplistic notions of what makes a good person.


You should edit that to 'These are people who don't care. It's not good.'


Both comments express this sentiment. I’m pretty judgey myself, but I’ve known these people for ten, twenty years. They are good people but even good people have blind spots.


We love labels don't we? No one is purely good or purely bad. Or purely left or purely right, etc.

Like most things, these things are points along a spectrum, not absolutes. Even that is probably too simplistic - points in n dimensional space I guess :)


> Needless to say, their actions outside the work environment matched their actions inside.

It's a given assumption that anyone who once worked for a shady company is also shady?


"It is difficult to get a man to understand something when his salary depends upon his not understanding it."


To any HN readers who work at Facebook: your company is morally decrepit and you are complicit. If you go to LinkedIn and indicate that you're looking, you'll have 20 leads in your inbox by lunch. You don't have to work there.


From Google, MS, Instacart, Uber et al? The reality is that every major tech company does scummy stuff. There are vanishingly few tech companies that don't.


There are a fuckload of tech companies. You don't have to work for a big name.


One day you're working for a plucky start up. The next you're working for a big name that's stealing tip money.

There's a reason that these companies do what they do. Fraud, breaking the law, and dark patterns are way easier ways to make money than doing so ethically and legally. Companies either die a hero or live long enough to become a villain.

I'm exaggerating a bit, but realistically the list of ethical tech companies is a lot shorter than the list of unethical ones.


This really isn't true. The vast majority of tech companies are ethical, at least a far sight more so than Facebook. They don't make headlines, so you have a bias.


Glad you’re a moral bastion but I gotta pay the bills and big companies have better work life balance than unicorn startups.


There are lots of ethical companies which are in the middle, too. But that's okay, your moral lackings are definitely excused by your lazy approach to job hunting.


Whataboutism doesn’t help.

There are levels and degrees of bad behavior.

Facebook is rotten from the head down.


Maybe it's just me but I don't think Facebook is particularly bad. They acquire user data and sell it to anyone who wants it. They should better limit the data they collect and who they sell it to. But compared with stealing tip money or literally killing someone? It's just not nearly as bad.


Stealing is bad, but limited in scope. A number of people lost some money.

The data collection gives power to those who hold the data. Having all data on someone gives absolute power. It can be both looking at intimate data to directly manipulate or dominate a person, or apply data science to an incredible detailed corpus of data, and in the future learn things about individuals and groups that we can't even fathom.


What about their dubious involvement in recent elections? Does that bother you?

> literally killing someone

You’ve presumably not read about Myanmar or fbs involvement in organising genocide?


>What about their dubious involvement in recent elections does that bother

It mostly bothers me because it seems like a great deal of the recent hate towards FB comes from people blaming them for Trump. If Clinton had won, I don't think people would care.

That said, yes I think FB should do a better job of "know your customer" when selling ads. Obviously they shouldn't sell ads to Russian intelligence. But it doesn't bother me all that much. For one, I don't think it had that big of an impact. For two, it's a dangerous path to go down when you start questioning if your political opponents have the right justifications for their votes.

>You’ve presumably not read about Myanmar or fbs involvement in organising genocide?

Which part of that involved a FB employee going out and literally killing someone?


I was mainly thinking of Brexit, followed by US elections.

In relation to fb, do you see any problem with what happened in Myanmar? They allowed the organising of a genocide.


Disappointed to not see Apple putting their foot down more forcefully on this. It's possible they spoke to them, but there should be repercussions beyond just "shut down this one app".

EDIT: https://seekingalpha.com/news/3427520-apple-banning-facebook...


F500 companies install certificates like this on employee-owned devices.

If FB hires these people, is there a difference?

Do you think a random admin assistant knows more than a techy teenager about the dangers of data privacy? Or is she just trying to scrape by for another week of rent?

This sounds like alarm from the rich and well to do about an ‘immoral’ way for the poor to make money.

Interesting nonetheless.


Yeah Facebook definitely just shut it down and walked away, definitely.

No harm done...


"Sorry we got caught"


It seems like we can't even go a week without Facebook getting caught doing something scummy. It makes you wonder what else they are doing. I assume the worst.


“... again”


[flagged]


What Facebook, politicians, and many others fail to understand is that there's a difference between legal and morally right.

Facebook makes it exceedingly easy for people to hate them, while technically doing nothing wrong. Well sometimes they break the law, but then Zuckerberg just says he's sorry and we all move on.

It's getting tiresome to watch our industry behave recklessly in the pursuit of ad revenue and the push to collect more and more data about the unsuspecting public.

I enjoy my work in IT, but companies like Facebook, and increasingly Google and Apple, the entire IoT business and a ton of startups are increasingly pushing me towards being less connected, less interested in new technology. It seems like a huge chunk of the industry is misguided. I'm sick and tired of it.


I am slowly turning into a Luddite. Whenever new devices come out, I immediately wonder whether spying on people is their main purpose and helping people is secondary.

The ad-supported business model really has corrupted tech to the core.


To be fair, the app in question clearly told people exactly what data would be collected and how it would be used. If people want to and choose to sell their data, is it still morally corrupt for the company collecting it?

In my experience, the large majority of non-tech people do not care much about their data being collected and would happily accept a monetary reward for things like their location history, usage stats, etc. Assume the data collector is using this information to show you relevant ads. Why would it be so much worse to see a relevant ad than an irrelevant ad? It's not as if you are going to stop seeing ads anytime soon.

So the question is: if the data producer is happy to sell the information and the data collector is using it to enhance the user experience (i.e. no evil intent), would you still consider this bad or immoral?


> To be fair, the said app clearly told people what exact data would be collected and how it will be used.

Did it explain the consequences of installing a root certificate, though?
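For readers unfamiliar with why that question matters: a root certificate installed in the device trust store lets its holder mint certificates for any domain that the phone will accept, which is what enables decrypting VPN-routed HTTPS traffic. Here's a toy sketch of that trust-store logic (no real cryptography — a "certificate" is just a record of who vouched for which hostname, and all names are made up for illustration):

```python
# Toy model of certificate trust. Real TLS uses cryptographic
# signatures and chains; this only models the trust-store decision.

def issue(ca_name, hostname):
    # A real CA signs with its private key; here we just tag the issuer.
    return {"hostname": hostname, "issuer": ca_name}

def verify(cert, trust_store):
    # Clients accept a certificate only if its issuer is trusted.
    return cert["issuer"] in trust_store

trust_store = {"OSRootCA1", "OSRootCA2"}  # ships with the OS

# A certificate minted by the research CA is normally rejected...
forged = issue("ResearchCA", "bank.example.com")
print(verify(forged, trust_store))  # False

# ...but once the user taps "Install" on the profile,
trust_store.add("ResearchCA")

# the certificate holder can impersonate ANY site, so TLS traffic
# routed through their VPN can be decrypted and read.
print(verify(forged, trust_store))  # True
```

The point of the toy: trust is all-or-nothing per CA, so "install this profile to get paid" quietly grants visibility into every encrypted connection, not just traffic to the researcher's own servers.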


>legal and morally right

I guess the first question is, if I do something illegal, but either avoid being captured or manage to otherwise avoid prosecution by using money or influence, have I done something wrong, legally speaking?

Because there is an argument to be made that making money off of children's consent to contracts is legally wrong, but because the practice is so widespread among those with enough political power to affect how laws are enforced, the laws are not enforced to protect children. That we aren't enforcing laws that protect children from being taken advantage of through consent that can't legally exist does not make the action legally right; or if it does, then we are effectively invoking the notion that "it isn't illegal if you don't get caught".


What you fail to understand is that the average layman doesn't care about data collection and ad tech. If it pisses you off that people don't care about the same things you do, welcome to politics.


What I care about are companies that simply have no moral compass. If no one at Google or Facebook think that they're pushing a little too hard to get ever more data about their users, then something is terribly wrong at those companies.

Google is particularly fascinating: their employees protest when their parent company wants to be a supplier to the US military, but apparently they have no moral objection to building detailed profiles on users (and non-users) in order to sell them more junk they don't need and can't afford.

The whole data collection thing is so abstract that the same people who don't want their employer to kill people with drones can't see the immorality of invading people's privacy. If that's the case, then how can we expect the layman to understand or care?


[flagged]


Funny you are accusing those employees of nefarious political motivations when you are pushing your politics up and down the thread (and others) in your sub-day-old anonymous account.


I get banned every day, else my account would be like 10 years old, haha.

I am not accusing them of "nefarious" political motivations, I respect their right to push their own politics. I wish they were more transparent, though.


> the average layman doesn't care about data collection and ad tech

True. But Facebook has managed—by demonstrating a repeated recklessness, unwillingness to reform and amorality that seems fundamental to its culture—to energise a vocal minority in a way no other tech company has. This vocal minority, moreover, is wealthy, politically connected and bipartisan.

This is already resulting in costs and reduced strategic flexibility for Facebook’s senior leadership. I expect it to turn into an existential threat for the company, as it’s currently organised, in the coming years.


There is a difference between not informed, and doesn't care.

How many people would be comfortable if you laid out the full spectrum of what is possible with the data that is collected, and how said data is distributed and sold?

It's frankly a massively dishonest claim that people "don't care so it's all fine".


Users realise that the ads they see are targeted to them, and infer that those ads are chosen depending on their activity. Do you think people would stop using Facebook if you told them about this? Or maybe you're wishing for the government to step in?


> Users realise that the ads they see are targeted to them, and infer that those ads are chosen depending on their activity.

My experience is that most people are surprised and mildly horrified that Facebook has most of their browsing history.


Can't Facebook at least pay for effective reputation management?

All those new accounts accusing critics of hating Facebook are really rather ineffective.


Sheryl should get the "definers" to cook up some new Soros conspiracy material to use as ammo.


They were caught violating the terms of Apple's Developer Enterprise Program.


Why do people defend large corporations so intensively?


Because the dream is still to get a job at one of them, for better or worse.


They were using enterprise distribution against Apple's EULA, distributing software that was forbidden from the App Store.


Not if it involves minors without parental consent it isn’t.


Well, that was pretty fast...

less than 8 hours after the original post hits #1 on HN.


Could Apple just decide to block FB altogether?

Could Google Play?


Could Verizon block the iPhone from their network?


I have no idea about the technical feasibility or the legality.

But Apple surely could remove Fb from the App Store.


Could an admin explain why the original news item, previously the highest voted of the first page, was pushed to the third page suddenly? Just wondering if it was algorithmic or manually changed.


Do you mean this post: https://news.ycombinator.com/item?id=19031055? The standard here is to have one submission about a particular news story at a time on the front page. That one, now 15 hours old, is near the top of the second page, while this one is on the front page.


Without wanting to be too offtopic, I assume FB employees are waking up. Isn't there an office in New York? Probably flagged to death.


HN also heavily penalises posts which have a high number of comments.


Right, but relative to the number of upvotes the submission does not have that many comments. And it fell just on the start of the workday (right?), see http://hnrankings.info/19031055/.

I am aware that's not a wanted discussion topic normally, but if behaviour related to an article about dirty behaviour smells dirty it feels a bit relevant.


It's best if you email us at hn@ycombinator.com, because we can look at the actual data. This is mentioned in the guidelines: https://news.ycombinator.com/newsguidelines.html.


Why is that?


To avoid controversial topics and flame wars (and likely to prevent heavy load on the servers since large threads cause significant strain on HN it seems)


I have seen this as well with controversial topics that make a major tech firm look bad. Specifically the third page as well.


As context the HN submission of the techcrunch article, for those who missed it: https://news.ycombinator.com/item?id=19031055. Strangely just vanished from the top position.


The standard here is one submission per news story on the front page. We've included this link at the top, thanks!


“Vanished”? It’s still in the top 15 as I comment here.


See http://hnrankings.info/19031055/, for a moment it was on page 3, before it was #1. Happy to see it's back up a bit.


I wouldn't be surprised if people have been flagging it. Back when FB was getting slammed for the Definers controversy among other ongoing controversies, multiple commenters were bemoaning HN's purported collective hate against FB. That said, makes sense for that thread to move down while this current discussion stays near the top, as this discussion contains the newest developments/fallout.

(in contrast, the similarly highly-upvoted Facetime bug thread [0] seemed to stay up longer, but no official fix had been made in the ~18 hours since it was first discussed)

0: https://news.ycombinator.com/item?id=19022353


It's not controversial, it's dastardly.

Facebook will shut down its c̶o̶n̶t̶r̶o̶v̶e̶r̶s̶i̶a̶l̶ dastardly market research app for iOS


And Apple just revoked Facebook's developer certificate. Nice.


Enterprise developer certificate, which is a different program than the standard Apple Developer program used to release apps on TestFlight and the App Store.


It's a disciplinary action for outrageous behavior.


> Finally, less than 5 percent of the people who chose to participate in this market research program were teens. All of them with signed parental consent forms.

"Your honor, I swear! He clicked on the button saying he was over 18!!! How was my porn site supposed to know he was lying?"

If Facebook wanted to do something useful, it could devote some of that $400 billion empire into identity verification and ways to prevent deceptive practices online. Something that goes beyond the Stone Age practices like SSN and Captchas that we rely on today.


Remember when putting a picture of yourself on the internet or even using your real name was controversial? The old generation warned not to do it, ever! But teens did it widely and profusely.

Now the new controversy is data. Don't ever sell your data! Don't give up your data! But teens are doing it widely and profusely.

At some point you just have to accept that new people will be born and will have new ideas and just won't give a fuck. As a business you can either sit on the sidelines and watch or capitalize on it.

Facebook will lead the way to allow for smaller players to get away with doing this as well.


>Facebook will lead the way to allow for smaller players to get away with doing this as well.

That's not how any of this works. It's hard to 'move fast and break things' when a company like Facebook has used all the loopholes and gotten caught.

Small players will never be able to replicate Facebook or their scummy tactics because at this point Facebook wants regulations. Regulations will ensure Facebook can and will continue to operate (can pay any fine) while the competition is hamstrung. If your competition is hamstrung it makes it easier to buy em up or wipe em out.

Just to come full circle, Facebook purchased Onavo so they could spy on users and figure out which apps they were using and how often. Facebook then used this data to buy up companies.

https://www.fool.com/investing/2018/12/05/facebooks-onavo-sp...


This is the slow break down of society. Let's not cheer it on, shall we?


This is literally what every generation seems to say about the behavior and habits of the next generation. I think it's pretty clear "break down" just means changes you don't like.


You haven't gotten to the point of proving that the "teens" would be correct to disregard their privacy, or that the "old generation" would be wrong to care about it.

I don't believe the behavior you're describing actually falls along generational lines, though, rather than technical competence or awareness. It isn't only "teens" who use social media, nor is it only "old people" who are concerned about privacy. In fact, younger generations are leaving Facebook and social media because of privacy concerns and the negative effect it has on their lives.

Your arguments here are stereotypical ageism and lack enough nuance to be convincing.


Meanwhile... Google, Microsoft and Apple* _receive_ money from their users to harvest the same personal data that Facebook is willing to pay their users for and nobody bats an eyelid.

* Apple is probably the least guilty here, however as long as their OS remains closed, we don't actually know how much personal data they are harvesting.


You can easily verify, using Wireshark on a router, that neither Google, Microsoft, nor Apple is redirecting ALL INTERNET TRAFFIC through their servers.

This is quite a step up from sending home some telemetry data.


It appears that you edited your comment above to make my "Wireshark won't decrypt encrypted traffic" comment look stupid. Nice try, so let me clarify...

Facebook's market research app isn't "redirecting ALL INTERNET TRAFFIC through their servers" either. If you read the article it states that the app monitors phone and web activity and sends it back to Facebook. This is not really any different from sending home telemetry data. Google, Microsoft and Apple all send home encrypted data, most of which is not verifiable using Wireshark.

I'm no fan of Facebook, far from it in fact, but at least this is an opt-in service and users are being compensated for sacrificing personal data. The same can't be said for the other three companies.


Wireshark won't decrypt encrypted traffic.


But it can easily tell you where the traffic is heading.


Indeed. The parent poster edited his comment which meant that my comment did not make any sense. See my other comment for clarification.


What bullsh*t. People got some money for giving up some privacy. Nobody was forced to do this. And now everybody is freaking out about a fairly reasonable trade.

What's next? People will start harassing students/researchers that do paid studies?


Certainly nobody was forced to do it, but it is more than likely that nobody understood what they were being asked to do. Not only that, children were being targeted by this scheme. Lastly, the implementation on iOS circumvented rules relating to app distribution, which shows Facebook's proclivity for flouting the rules.

Pushing the app distribution thing aside, being aboveboard in every other respect is still insufficient justification for ethically questionable processes.

Facebook has a history of unscrupulous untrustworthiness which should not be overlooked when examining the implications of the scheme, particularly the requirement to install a root certificate. To ignore the context of the polemic, to pretend Facebook is just another company rather than one of the largest collectors of personal, private information on the planet, multiple times caught invading peoples' privacy through less than honourable means, is foolhardy at best and dangerous at worst.


Well, I am not a big fan of telling people what to do or what not to do. Do you want to sell your private info to some corps? Feel free to do so.

Will it blow up in your face? Perhaps. I could not care less. It's your problem. You didn't know what you were being asked for? Again - it's your problem.

People agreed to do that on their own, they got paid for it, and, most likely, they don't care if FB knows what kind of porn they browse.

All the stuff about ethics and "honourable means" is irrelevant in this argument. Is war ethical? Is spying honourable? Depends on whom you ask.

>requirement to install a root certificate

Requirement? You can just tell them to f*ck off.

In any case, I could not care less about this, but what annoys me is the people that pretend to be super-nannies that are gonna save the world by telling others what they should do. Throughout history, this has never worked.


> Do you want to sell your private data to corporations? Feel free to do so

That isn't the point. The point is that Facebook preyed on technical illiteracy and a general and widespread lack of understanding of the implications of participating in the program. Effectively, the majority of participants were tricked. The age range for participants also included people who were not of age, and therefore not legally responsible for their actions. Facebook must accept that responsibility.

The point is that Facebook flouted Apple's guidelines for the distribution of apps outside the App Store, showing a blatant disregard for the protections put in place to protect consumers from bad actors. Facebook has positioned itself as a bad actor through their actions, not only in the questionable collection of data the implications of which the users will likely be unaware, but also in how such a program was distributed: in direct violation of the protections offered by the App Store.

The point is not about the choices made by the end users, but rather the unscrupulousness of a big company that knows better — but has done this before, far too many times.

Finally, the very nature of online interactions in the modern era means that people aren't just signing away their own privacy but also, to a lesser extent, the privacy of those with whom they interact. Facebook is perfectly aware of this potentiality, but users are not and, on the whole, will never be because it isn't their job to understand the technical dimensions of online communication. A big company like Facebook, however, does know and should have behaved accordingly.

> [...] is irrelevant in this argument. Is war ethical? Is spying honourable?

What do these two examples have to do with the actual circumstance? "Market research" is not war, and it certainly should never be construed as synonymous with spying. Do not construct a strawman against which to argue; it demeans your argument.

> Requirement? You can just tell them to fck off.

You could, but then you would not be adequately participating in the research. You would not be using the VPN as described. You would not earn the $20. There would be no discussion.

Of course, you're failing to account for the fact that none of the participants will have had the privacy and security implications of the root certificate explained to them in a way that made sense to them. They'll have simply followed instructions to get their money.

I do not think that people should ever be blamed for being deceived as to the severity of their actions in situations such as this; a big company like Facebook does not escape scrutiny here. Clearly you believe differently, although the downvotes will tell you how well such a laissez-faire attitude to other people's private lives, and to preying on their technical ignorance, is received, so I shan't bother to comment any further.

> what annoys me is the people that pretend to be super-nannies that gonna save the world by telling what the others should do

Thankfully, that's not the situation at all*, and I fear you're simply projecting some negative feelings on to this article in order to justify having an unjustifiable gripe.

Here's what's happening:

- Facebook previously had a VPN service that it advertised as being for market research purposes. It was removed from the App Store.

- Facebook then started using Apple's alternative app distribution method intended only for use in enterprise situations, not for the general public.

- They were found out.

- Facebook voluntarily ended the program for iOS users.

- Apple revoked Facebook's certificate as punishment for flouting the rules.

Who is telling whom to do what, here? Facebook did many things wrong, were found out, and were punished appropriately. The technical details of their actions were analysed and found to be vastly overstepping their bounds, yet in step with their continual and repetitive breaches of personal privacy.

There's nothing more to it than that, so put the strawman back on the farm where it belongs.


One of the more important issues here is that after the whole Onavo thing already got them into hot water, Facebook completely flouted Apple's clear rules re: enterprise certificates, and snuck in AGAIN through the back door.

I'm very interested to see what Apple's response is going to be. I'd not be shocked (in fact I'd be delighted) to see them penalize FB in some way, perhaps suspend their App Store account or something.



I think you might underestimate how seriously universities take research ethics, these days. Ever since the late 1970s or so. I doubt Facebook’s actions would have gotten past a typical university ethics board, which would be required before the study could proceed.


That's rather charitable, comparing this project by Facebook to an academic study.


Did you mean to reply to the parent comment?


With the replication crisis hitting ~half of publications, there is still work to do.


And now everybody is freaking out about a fairly reasonable trade

OK let’s imagine that you have a close friend or family member with some confidential issue - maybe an illness, maybe debt, maybe they are in the closet. Occasionally they message you, on old-fashioned SMS or email, mentioning something about it.

How many dollars is a reasonable trade to tell a data collection agency everything you know so they can add it to their file on your friend/relative?


Probably a lot more than $20/month. But it certainly has a price. What FB is doing is definitely scummy, but if individuals are informed of the risks and of exactly what the app does, I think they should be able to make this trade if they want to.

I probably wouldn't, unless it was in excess of maybe >$1000/month. And even then I'd probably just get a new phone. But people should have the right to sign contracts, even if they seem exploitative, as long as they are aware of what they are agreeing to.

The main problem it seems here is that a lot of the people were underage.


Good point. Someone else essentially gets paid to compromise your privacy without your consent.


I mean, if you tell me something, I can legally decide to tell someone else. Unless we signed a contract or something.


Actually, in some states, two-party consent is necessary to record conversations; 11 states, including California, require it. It’s a fair assumption that an app that is vacuuming up everything someone does on a phone is potentially gathering data that would fall under two-party consent laws. Eavesdropping is also a potential crime under common law. It’s a murky legal area in this case, but one that certainly has some merit.


Sure, but maintenance of the confidence of a friend shouldn’t need a law.


Well, I pity the fools that assume that their communications are private when they are using SMS, FB Messenger, or Hangouts. If you want your info to stay private, do not send it to untrustworthy parties via untrustworthy means.

Even if you stick with apps like Telegram or Wire (my choice), you have to bear in mind that your phone might have a keylogger on it (looking at you, Xiaomi and Huawei).


If you want your info to stay private, do not send it to untrustworthy parties via untrustworthy means

Well there’s the rub isn’t it. I don’t think most people would consciously decide to rat out their friends secrets for $20 - or indeed for any price. But somehow, it’s happening.



