
For anyone who has ever worked at a FAANG-like company in the last decade, yes, this is actually very hard to believe.

Despite the shady image they have, these companies go to great lengths to avoid doing shady things (because ultimately it’s bad for business). Not to mention the hundreds of tech employees who would have to be involved and keep quiet in this type of “conspiracy”. It’s incredibly unlikely; I truly believe that.




I can imagine you haven't been involved in anything illegal, but I'm sure you're aware of Meta's documented track record of coordinated illegal actions. Do engineering teams just fall head first into a bucket of 2FA phone numbers and start using the data for ad targeting, and nobody bats an eye from the legal department to the product managers? Or are they hypnotized into building services for biometric data collection without consent? Nobody does anything nefarious, but their collective actions, which benefit the company, just end up being illegal, again and again?

The tech companies you work for do often engage in illegal activities, and some of your colleagues are complicit. I'm sure it's an uncomfortable thought for some of you, but this is all part of the public record.


I think people have a natural bias toward NOT seeing the bad in the organizations that pay them $$$$.

This is certainly true in a lot of finance.


I completely agree (as another FAANG employee). It's ridiculously hard to do anything against policy once it's set, and trust me, the policies are set. The media overplays a lot of things that simply aren't there.

The sad reality is people are very predictable, even with basic data.


The employees are obviously told that the functions and APIs they are implementing have a completely legit use case. That is not hard to believe at all, and it was the case in the Cambridge Analytica scandal, for example.


"bad for business" leads to systems that do unexpected things. For instance, on-device generate identifiers for any image sent, and send the identifier out-of-band. This helps catch child pornography.

I can imagine the same thing done for text. The text might be encrypted, but interest keywords might be generated on-device and sent out-of-band.
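To make the shape of that concrete, here's a minimal sketch of the pattern in Python. Every name, endpoint, and the interest vocabulary are my own inventions, and I'm using SHA-256 as a stand-in for a real perceptual hash (PhotoDNA-style); none of this is a claim about what any actual app runs.

    import hashlib
    import json

    INTEREST_VOCAB = {"camping", "mortgage", "crypto"}  # assumed interest list

    def image_identifier(image_bytes: bytes) -> str:
        # Stand-in for a perceptual hash (e.g. PhotoDNA); SHA-256 only matches
        # exact copies, whereas a real perceptual hash matches near-duplicates.
        return hashlib.sha256(image_bytes).hexdigest()

    def extract_keywords(text: str) -> list[str]:
        # On-device keyword extraction against a fixed interest vocabulary.
        words = {w.strip(".,!?") for w in text.lower().split()}
        return sorted(words & INTEREST_VOCAB)

    def deliver_encrypted(ciphertext: bytes) -> None:
        print(f"[E2E channel]  {len(ciphertext)} opaque bytes")

    def deliver_telemetry(payload: str) -> None:
        print(f"[side channel] {payload}")

    def send_message(ciphertext: bytes, plaintext: str, attachments: list[bytes]) -> None:
        deliver_encrypted(ciphertext)      # the message itself stays unreadable
        deliver_telemetry(json.dumps({     # the derived metadata does not
            "image_ids": [image_identifier(a) for a in attachments],
            "keywords": extract_keywords(plaintext),
        }))

    send_message(b"\x9f\x02...", "Thinking about refinancing my mortgage", [b"raw-image-bytes"])

The point of the pattern is that the end-to-end encryption stays formally intact -- only metadata derived on-device ever leaves through the side channel.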


The PRISM "conspiracy" was very shady and probably involved hundreds of employees. And if they have quietly kept people punching holes for the government, it's not crazy to think some of that data could leak into other parts of their pipelines too.

I'm not claiming this is real, but I agree with GP.


Let me start by saying I have no idea if Facebook is reading my encrypted messages or whatever. However, I will say that in my experience, whether something would be bad for business if discovered is usually not a concern for large corporations, as long as the thing being done makes them more money. Everything is just a balance sheet.

For an example from outside FAANG, see the illegal dumping of toxic waste by chemical companies, such as DuPont and PFOA [1]. Despite knowing what they did was illegal, the math worked out: products containing PFOA brought in something like $1 billion in annual profit, and even when they got caught, the fines and legal costs were a fraction of that, spread out over many years.
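To put rough, purely illustrative numbers on that: a decade at ~$1 billion/year is ~$10 billion in profit, so even a one-time $1 billion fine paid years later amounts to about a 10% tax on the scheme.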

So I personally believe these companies 100% would do shady shit if it increases their profit margins. And why wouldn't they? There is no room for morals in capitalism, and the drawbacks are slim.

[1] https://www.nytimes.com/2016/01/10/magazine/the-lawyer-who-b...




