Facebook has struggled to hire talent since the Cambridge Analytica scandal (cnbc.com)
634 points by Despegar 5 months ago | 423 comments



I just spent three months hiring in NYC, and now that I think about it, I haven't seen a single person mention they were considering counteroffers from Facebook. For context, Facebook and Google are the two largest tech companies with a significant NYC presence. It's telling that a substantial portion of our candidates admitted to considering competing offers from Google, but literally no one was considering Facebook.

> Usually half of the close is done for recruiters with the brand Facebook has

I'm also finding that company brand plays a huge role in closing candidates. Our company's brand is generally pretty strong, and I've found one of the things candidates respond to most is the story we tell about our company's past, present, and future. Facebook's story has become "we were founded by a jerk who didn't care about privacy, our not caring about privacy has had massive consequences for American and global society, and our promises to improve our approach to privacy in the future have proven to be disingenuous smokescreens."

It's no wonder the substantial portion of people who care about their employer's ethics are turned off.


There is also an issue with the 'evaporative' effect. If no one who works there is seen as 'ethical', then you'd expect the people who do work there to be unethical or dubious. So trying to get a promotion is more cut-throat, the lunch crew has a few more 'jerks', HR is a bit more biting, etc. Your hackles get raised and you become more suspicious of the motivations (however benign) of others. Better to just not get involved.


Sheryl hired the swiftboat campaigners to stop Congress. Finding out about that made me assume that over time most of their employees would trend away from optimistic.


I wonder if the exec team realizes that the ad-tech industry had their Great Financial Crisis. Nobody is in love with them anymore. They'll get as much reception from politicians as Wall Street did when Congress passed Dodd-Frank. Banks don't earn much more than their cost of capital anymore.


> Banks don't earn much more than their cost of capital anymore.

This doesn't sound right. Can you add some citation or detail?


Goldman Sachs's return on equity used to average 20-30% before the crisis. Now a decade after the crisis they're glad to be doing more than 10%.

https://i.imgur.com/gtE05WX.png


I’m on mobile so I can’t read the numbers on the Excel screenshot you provided, but the historic high return on average equity for banks[1] (not including brokerages) was 16.29% in 1999. At last measure it was 11.85%. Dodd-Frank was merely a speed bump. The vast majority of banks have long since recovered from the crisis.

[1] https://fred.stlouisfed.org/series/USROE


That's not a good chart for banks. Their cost of capital is 8-10%; doing 11 or 12% is pretty shitty compared to the pre-crisis era.

And the smaller community banks have less onerous regulations than the big ones. Banks have to be much more capitalized and have less leverage because of Dodd-Frank. That's why the return on equity is mediocre.
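To make the leverage point concrete, here is a toy calculation (all figures hypothetical, for illustration only): return on equity is roughly return on assets times leverage, so capital rules that shrink leverage shrink ROE even when the underlying business is unchanged.

```python
# Toy sketch (hypothetical figures): ROE = ROA * leverage, where
# leverage = assets / equity. Capital requirements that force banks
# to hold more equity per dollar of assets compress ROE even at a
# constant return on assets.

def roe(roa: float, leverage: float) -> float:
    """Return on equity = return on assets * (assets / equity)."""
    return roa * leverage

# Pre-crisis: ~1% ROA at ~25x leverage -> 25% ROE.
print(f"pre-crisis ROE:  {roe(0.01, 25):.0%}")   # 25%

# Post-Dodd-Frank: same ~1% ROA at ~11x leverage -> 11% ROE,
# barely above an 8-10% cost of equity capital.
print(f"post-crisis ROE: {roe(0.01, 11):.0%}")   # 11%
```

The numbers are made up, but the shape matches the thread: the business can be just as profitable per dollar of assets while equity holders see half the return.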


While I'm not personally keen on him, Trump's admin rolled back CCAR to every 4 years (every 6 for small books). Expect lower regulatory costs.


What's the app you used to get so much history?


FactSet


It isn't right. The majority of banks are profitable, the industry is somewhere around $200B/year of profit (I think that's just retail banking, not including brokerages and stuff).

This is historically high.


Wow! Source?


https://www.nytimes.com/2018/11/14/technology/facebook-data-...

The person who runs Definers Public Affairs, Matt Rhoades, was the opposition research director for Bush/Cheney '04.

https://en.wikipedia.org/wiki/Definers_Public_Affairs

https://definersdc.com/team/matt-rhoades/

> In 2004, he played a critical role in President George W. Bush’s winning re-election campaign, serving as the Research Director for Bush-Cheney ’04, where he helped develop the campaign’s opposition research, message development and rapid response operations.

The oppo research director of the campaign can have no official links to the 527 group, of course. But the group exists for one reason only - to discredit political opponents.


Sheryl Sandberg and Zuck are a team. They are aligned on some bizarre goal that functions as exactly the kind of juggernaut I got into tech specifically to avoid.


There’s also the reputation impact as well. When all of the bad things at Uber eventually became public, Uber engineers started reporting difficulty in getting new jobs. Apparently hiring managers assumed, perhaps correctly, that anyone who stuck it out at a toxic place that long was possibly the source of toxicity themselves.


If you lie down with dogs, you get up with fleas.


Can confirm. Worked there. Deeply disenchanted with the ethics of many people especially in the product, marketing, and (as you would guess) senior leadership groups.


Thanks for the input! I know you're on a throwaway account, but any stories or context for a Friday morning?


> "It's telling that a substantial portion of our candidates admitted to considering competing offers from Google, but literally no one was considering Facebook."

Interesting anecdote. Google is a bigger concern for privacy and personal liberty, yet jobseekers are shunning Facebook because of the more wide-ranging negative press.


> Google is a bigger concern for privacy and personal liberty

Big claim. Any proof?

With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns - no matter who the candidates are.

Extremely worrisome if you prefer to have people elected through a democratic process based on discussion. This is not about understanding what people want or how they think through big-data analysis, but about manufacturing it.

Sure - provocations and lies are not new; they have always been part of politics. But with social media, everything is at scale and everything happens violently.


> With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns - no matter who the candidates are.

(EU citizen here): I would prefer corporations (especially ones with such depth of funds and breadth of influence) be kept entirely outside the electoral process. If that's not feasible, then the second best option is, indeed, that they provide the same service to all candidates, no matter who those candidates are.

Am I getting it right that you'd prefer they pick some candidates to help, to the detriment of others? Because that option does not sound very healthy to me, personally.


My ideal system is to eliminate all private money from elections. You as a candidate are given a stipend by the FEC at the beginning of the campaign season, the same amount as any other candidate for the same office, and you are free to spend it. You ran out? Tough. See you next election cycle.

Nobody is allowed to give you money or anything else of value (including free airtime) -- not individuals, not companies, just the FEC. Anything else is bribery, and a crime. That way, you're not going to do things just to please your benefactors and gain an edge over your opponent next time, as your war budget is already accounted for. Instead you can focus on doing what's best for the people.


If you really want money out of politics then you replace all of them with sortition (assemblies of the people). They are representative, being drawn at random, for the whole population and they are not elected, so they don't need to campaign. Similar to politicians, they need to be supported by experts and advisors in the specific topic they are working on. I'd rather trust a group of random people deliberating than a bunch of professional liars. Sortition is a way for people to participate in democracy more than voting once every couple of years and posting on FB.
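The selection mechanism of sortition is easy to sketch. A minimal, hypothetical version in Python (the roll, seat count, and seed below are made up for illustration):

```python
# Minimal sketch of sortition: seat an assembly by drawing names
# uniformly at random, without replacement, from an electoral roll.
# Every eligible citizen has an equal chance; nobody campaigns.
import random

def draw_assembly(electoral_roll, seats, seed=None):
    """Return `seats` members chosen uniformly at random, no repeats."""
    rng = random.Random(seed)
    return rng.sample(electoral_roll, seats)

roll = [f"citizen-{i}" for i in range(10_000)]   # stand-in for a real roll
assembly = draw_assembly(roll, seats=100, seed=42)

print(len(assembly))       # 100 members
print(len(set(assembly)))  # 100 distinct people (no one drawn twice)
```

Real citizens' assemblies usually stratify the draw by age, gender, and region so the sample mirrors the population; the uniform draw above is the simplest possible version.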


My approach is different, two houses (assemblies) one elected and one via sortition.

Elected can propose and pass but sortition house can block.

Basically the commons/lords setup, but with the lords replaced with random people.

That way the politicians answer directly to the people (or a random sample) of them.


I have been thinking exactly the same. I now have to wonder if you work at Facebook and have "read my mind" via my likes… ;-)

I've been musing on whether there should be some small barriers to joining, as there are in jury service - perhaps the elected get to oppose a certain number of candidates, or candidates must pass a civics/governance test first so at least they have some technical knowledge going in. I can see that being twisted into something bad, though.

Much as I dislike the Lords Spiritual in the current system, I wonder if the sortition should embrace it and be a "tulip farm" of certain interest groups e.g. 20 each for religion, business, justice, commoners etc, as then there's a definite base of understanding in important areas.

However it would be arranged it'd be hard for it to be worse than having the Lords full of lords though.


OK, what if people with money want to spend that money on political speech _without_ coordinating with the candidate? That's the Citizens United problem.

There's no quid pro quo bribery, but if the NRA spends a bazillion dollars attacking your opponent but not you, it'd be hard to say there's no influence on your decision-making process. At the same time, it's really tricky to ban. Is something like Michael Moore's Fahrenheit 9/11 a form of political advertising?


> Is something like Michael Moore's Fahrenheit 9/11 a form of political advertising?

Not only that, is CNN or Fox News coverage of a sitting politician who is running for reelection a form of political advertising? How they choose to report stories -- and which stories they choose to report -- can certainly affect how the voters view the candidates.

Probably the best solution is to just have a signature requirement: if you collect enough signatures, the government gives your campaign an amount of money equal to the average amount of private money raised by successful candidates running for the same level of office in the previous election.

Then the average privately-funded campaign will have twice that much (if they get the signatures too), but a factor of two isn't huge here. It's more of a threshold situation where once you reach a saturation point it's diminishing returns. Get the candidate to that point with public money and the value of trading legislation for private money would be much diminished.

Of course, you still have the problem that too many people vote for who cable news tells them to.


This is reasonably similar, AFAIK, to how it works in France (and I assume many other developed countries).

Campaigns do spend their own money but it is capped at some low value to even the playing field, and they also receive public funding. TV stations are required to give equal time to all candidates (in the 2007 election there were 12 candidates, only 3 of which had any realistic chance of winning, but major TV stations spent equal time interviewing all of them).


> Nobody is allowed to give you money or anything else of value

That sounds like a form of extreme boycott - and however much I despise politicians in general, subjecting them to what is essentially expulsion from society (at least until the election ends), plus a complete gag order and media blackout (because otherwise I could promote a candidate without giving them money directly - I would just publish ads under my own name praising the candidate), just for wanting to be elected seems a bit extreme to me. Not to mention that, at least in the US, it's probably incompatible with at least half of the constitutional amendments.


Well you'd have to toss out the 1st Amendment to get that idea off the ground.


The courts have already supported certain restrictions to free speech, so it doesn't require a wholesale toss of the 1st.


Yes, the idea that campaign donations = speech is a new one as well. Tbh I think we need a constitutional convention to really solve all the problems with the American political system.



And what if a candidate wants to spend that stipend on Facebook ads. Is Facebook not allowed to have a salesperson take that money and sell ads to the candidate?


What does free airtime mean in this context? People used that term a lot about the coverage of Trump during the 2016 election, but forbidding news outlets to cover a candidate is obviously absurd.


Speaking as a U.S. citizen, your thoughts were exactly my reaction to that comment.


> I would prefer corporations (especially ones with such depth of funds and breadth of influence) be kept entirely outside the electoral process.

Do you mean corporations being banned from providing services to any electoral campaign? Probably not, because in that case an election campaign would be impossible. If so, then Facebook would be free to provide promotion services to any political campaign too - they are a service provider like any other.


> because in that case an election campaign would be impossible

Would that be a bad thing?

I would love an election that was simply an announcement of the election date and a website to see the candidates and their politics. The only campaigning would then be the government campaigning to get people to vote.


> Would that be a bad thing?

If you want to have elections, yes. If you prefer hereditary monarchy, then you'd be fine.


With Facebook, it's easy for an average individual to leave the platform for good: stop using Fb/Insta/Whatsapp and install something like Privacy Badger to avoid tracking on all the other sites that have some form of Fb integration.

Leaving Google, by contrast, is way more difficult: their ecosystem reaches literally every corner of the web, and you have to deal with it even if you don't consciously use any Google product. For example, if Recaptcha doesn't like you, everyday online tasks like paying public school fees [1] or signing up to an online forum become much harder. Another example is AMP, where the fact that you are reading an article hosted on Google infrastructure is often hidden from you; there are many more examples. Trying to quit Google feels like that episode of Black Mirror where a woman is ostracised by everyone because she doesn't have the same cybernetic implant everyone else is using. Just because Google hasn't been caught in any scandal comparable to Cambridge Analytica doesn't mean it's OK for them to have so much unchecked power.

[1] see my submission history for details
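At its simplest, a blocker like Privacy Badger refuses third-party requests whose host matches a list of tracking domains. A toy sketch (the domain list here is illustrative, and the real Privacy Badger learns trackers heuristically rather than shipping a fixed list):

```python
# Toy sketch of tracker blocking: drop requests whose host is a known
# tracking domain or one of its subdomains. Domain list is illustrative.
from urllib.parse import urlparse

TRACKER_DOMAINS = {"facebook.com", "facebook.net", "instagram.com"}

def is_blocked(request_url: str) -> bool:
    """Block if the request host is (a subdomain of) a tracker domain."""
    host = urlparse(request_url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS)

print(is_blocked("https://connect.facebook.net/en_US/sdk.js"))  # True: FB SDK beacon
print(is_blocked("https://example.com/article"))                # False: first-party page
```

This is why "stop using Fb/Insta/Whatsapp" plus one extension largely removes you from Facebook's view: the tracking surface is concentrated in a handful of well-known domains.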


It's getting a little wearying to have to rehearse the ways in which Google is a threat to privacy. But let's get the band together one more time:

Google runs search and email for essentially the entire web, controls the market dominant browser and mobile OS, has tracking scripts on >75% of the top million websites and runs a fair amount of the internet's infrastructure. It is the senior partner in the online advertising duopoly (together with Facebook) and runs one of the three major cloud computing services. It has also become the de facto standards authority for the internet and runs a massive continuous operation to collect photos of every street on the planet, which it is now expanding into interior spaces. It sells always-on microphones for the home, as well as a line of internet-connected home appliances. It does so much invasive stuff that I've probably forgotten half of it here.

So it's neither a big nor a controversial claim in 2019 to point out that Google has unique breadth of visibility into both the physical world and anything that touches a connected device.


No one disputes that Google has its fingers in many pies - and definitely needs to be kept on a leash. But the claim was that Google is not just a matter of concern - that it is somehow a clearly bigger threat than FB.

Can anyone provide substantiation for that claim?


That seems obvious. If you don't use Facebook you're pretty much outside of the Facebook tracking network with a few exceptions wrt Facebook cookie tracking which you can kill with a browser plugin like Facebook Disconnect. With Google, the tracking surface area is orders of magnitude more ubiquitous - everything from Search to YouTube to Chrome to Email to Android and on and on. Facebook is almost (but not quite) negligible in comparison.


> If you don't use Facebook you're pretty much outside of the Facebook tracking network

Not if any of your friends use Facebook.


That's a pretty considerable exaggeration. There is a big difference between "I have friends that use Facebook" and "I have friends who take pictures of me and upload them to Facebook" or "I have friends that upload their contact list to Facebook" and even then, the amount of data that Facebook can extract from you in that way is pretty minimal relative to just about any other activity people commonly engage in online.


Google also sees pretty much every website visit for every website in the world, through Google Analytics.


"Orders of magnitude" (plural) means something on the order of 100x.

Which suggests to me that either you're exaggerating quite a bit - or you were using the term without quite knowing what it means. (Which is something more specific than simply "a lot".)


lol, I know what the term means, thanks.

Anyway, a 100x tracking surface area is pretty accurate and not an exaggeration in the least; if anything it is too conservative of an estimate. Just Android, search and analytics on their own are easily 100x the tracking surface area of everything Facebook does, that's without considering:

gmail, home, docs, amp, drive, maps, hangouts, chrome, chrome os, messages, voice, ads, gcp, youtube, firebase, music, waze, play-store, places, wallet, domains, duo and so many more. I don't understand how this isn't completely obvious.


> I don't understand how this isn't completely obvious.

Because you're living in a bubble, and have grown accustomed to thinking that everyone else is using the same lenses to view the world as you are.


The "bubble" is called reality; I elaborated on my point with detailed reasoning and all you've done is throw around insults like a troll. No point in continuing this discussion any further. Have a nice day.


> I elaborated on my point with detailed reasoning

It was extremely hand-wavy, actually.

Whether you take that as an insult or not is up to you.


https://energycommerce.house.gov/sites/democrats.energycomme...

https://www.eff.org/deeplinks/2019/04/googles-sensorvault-ca...

Sensorvault:

“includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade.”

US law enforcement had been regularly accessing Sensorvault user data in a dragnet-like fashion to obtain location details for hundreds or thousands of users


> With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns - no matter who the candidates are.

I have become more and more convinced that this is Facebook's real business model; that enabling instances of archetype CambridgeAnalytica is the purpose the company actually exists for.


Just look at the amount of personal information Google knows/records about you. Your search history, web stats through Chrome, location history through Android, with whom you exchange emails if you're using GMail, which sites you visit and how long you stay on them through Google Analytics, probably online purchases with a combination of AdSense/AdWords & Analytics, everything you watch on YouTube etc.

They definitely collect much more data than Facebook. The only reason they haven't faced the same shitstorm is because they don't seem to share all that data with 3rd parties.


> The only reason they haven't faced the same shitstorm is because they don't seem to share all that data with 3rd parties.

This is the whole point though, is it not? As far as we know, Google treats the data they collect more thoughtfully and responsibly than Facebook. And so they are (rightly or not) viewed as less of a threat to the public good.

Of course, they could just be better at hiding their abuse of our data... But that's a conspiracy theory, not a matter of public record like the Cambridge Analytica scandal.


No, it's not. They share the data indirectly by allowing companies to target individuals for advertising purposes based on that data. You search for shoes on Google and then ads about shoes follow you all over the web. So while you can't download users' posts like CA did in order to profile them for their political affiliation you can surely target them for whatever product you want to sell. If it was just about ads on Google everything would be hunky dory. But it's not. Just because they're nice and cool doesn't mean we have to give them a free pass to our personal lives.


Is that really any better? Google is so monolithic and all encompassing that data collected by their services can be shipped around internally instead of having to be sold to third parties.


> be shipped around internally

How is that a problem? The issue at hand is the irresponsible handling of data (especially wrt 3rd parties), not the general handling of 1st-party data competently within an internal network.

So yes, it's a LOT better.


In what way? If Google uses your search history to target you with ads, is that somehow better than them leaking the data and a third-party service targeting you the same way? The end result is the same.


How are they the same? Single source you trust versus multiple unknown parties having your data.


> Single source you trust

Assuming that the single source is trustworthy, sure. But we're talking about the likes of Facebook and Google here.

The two use cases for data aren't identical, and actually shipping the raw data out is worse. But, in my opinion, the two things are similar and the shipping out of data is not that much worse.


Take trust out of the equation. It's one entity that is optimised to extract money from you and your data, vs many companies doing the same.


>They definitely collect much more data than Facebook. The only reason they haven't faced the same shitstorm is because they don't seem to share all that data with 3rd parties.

And that is something that is much more relevant to many users. I don't mind sharing a lot of my data as long as I know where my data actually ends up. If Google uses my data to improve their ad algorithm, I'm fine with it; if my Facebook data ends up in the hands of some election-manipulation company, I'm not fine with it, no matter how much data it is.


> I know where my data actually ends up

And how do you know what Google does with it? AFAIK Google has never officially stated in specific detail what data they collect, what they do with it, who can access it, etc.

Their Privacy Policy gives them a giant escape hatch to essentially do anything with it -

"We provide personal information to our affiliates and other trusted businesses or persons to process it for us, based on our instructions and in compliance with our Privacy Policy and any other appropriate confidentiality and security measures."

https://policies.google.com/privacy?hl=en-US#infosharing


I think your not quoting the rest of that sentence is quite disingenuous:

>"...For example, we use service providers to help us with customer support"

As far as I'm aware there is no evidence that Google shares my personal information, without my explicit consent, with third parties like Cambridge Analytica, which collected tens of millions of individual user profiles.


Sorry, how is it disingenuous? I didn't consider the example relevant to the policy itself, and I provided a link to the source material for anyone to read. Giving a benign example is meant to downplay the fact that Google can do anything they want with your data.


Anything? How so? They are bound by their legal disclaimers and by laws, which prohibit many options automatically.


Anecdote time.

My wife and I typically donate to a few non profits, such as the ACLU and Trout Unlimited. They occasionally mail us, but we did give them our address so that’s ok.

But one day she donated to the Environmental Defense Fund. Since then, the number of surveys and donation requests from random non-profits has exploded to 3-4 a week, including weird ones like evangelical surveys and pro-Israel things. My wife is pissed at the EDF and will never give them another dollar.

The point? We were both fine having the non-profits having our address and using it, but knowing that one of them sold that data really pissed her off.


> I don't mind sharing a lot of my data as long as I know where my data actually ends up.

But I do, and Google (and Facebook) suck up my data anyway, whether I use their services or not.

That's the real fundamental issue.


A bigger problem to me is Google's search bias and subtle manipulation. The same goes for Facebook's news curation algorithm. These things can directly impact our democracy, yet it's much harder to tackle or even investigate, because the whole thing is so elusive and subjective.

To me privacy seems to be already a lost cause. We've lost it and there's little hope to take it back. Also privacy violation is a relatively easy problem to understand. For bias and manipulation, however, we don't even know what to do.


Has Google ever disclosed exactly what data they collect, what they do with it, who can look at it, etc? We "know" that Google takes privacy "seriously", but that is a faith based position.


Actually, Google has. [0]

And I can't vouch for all of Google, but regarding location data, Google has been pretty transparent regarding which data is collected and stored; papers like NYT covered it extensively - see [1].

And Google also gives you clear ways to delete this data, as referenced in that NYT article [2].

And moreover, Google has been consistently on track to store less private data. Example: location data is going to be auto-deleted for users that want that, as of this month[3]. Maps now gets an incognito mode[4].

>but that is a faith based position.

Hope the links I referenced will help dispel this notion. Google does take privacy seriously.

(Disclaimer: I work for Google. The opinions expressed here are mine and not those of my employer; what I said is public knowledge.)

[0] https://policies.google.com/technologies/retention?hl=en-US

[1] https://www.nytimes.com/2019/04/13/technology/google-sensorv...

[2] https://support.google.com/accounts/answer/3118687?hl=en

[3] https://mashable.com/article/google-auto-delete-location-his...

[4] https://www.theverge.com/2019/5/7/18535657/google-incognito-...


> And I can't vouch for all of Google, but regarding location data, Google has been pretty transparent regarding which data is collected and stored; papers like NYT covered it extensively - see [1].

How did you read that article and come away with the conclusion that Google has been "pretty transparent"? The story was written after more than a year of other news outlets reporting on law enforcement using Google's location data to fish for suspects. Google had been providing this data for at least two years before the Times reported on it [0].

> And moreover, Google has been consistently on track to store less private data.

Such as credit card transaction data collected without most people's knowledge [1] or location data after you've explicitly told it not to [2]?

Technology companies need to understand that both words "informed consent" are important. We currently have very little in the way of choices when it comes to data collection. It is simply not possible to opt-out anymore without tremendous effort and personal cost. I like this quote from Maciej Ceglowski:

"A characteristic of this new world of ambient surveillance is that we cannot opt out of it, any more than we might opt out of automobile culture by refusing to drive. However sincere our commitment to walking, the world around us would still be a world built for cars. We would still have to contend with roads, traffic jams, air pollution, and run the risk of being hit by a bus. Similarly, while it is possible in principle to throw one’s laptop into the sea and renounce all technology, it is no longer possible to opt out of a surveillance society."

[0]: https://www.wral.com/Raleigh-police-search-google-location-h...

[1]: https://www.cnbc.com/2017/05/24/google-can-now-track-your-of...

[2]: https://www.apnews.com/828aefab64d4411bac257a07c1af0ecb


All these links are a year or two old.

A big push towards openness and privacy has happened over the last year.

On an individual level, I don't think it's hard to opt out of Google's tracking.

I won't argue with Maciej's quote, though, because, just like with automobiles, people will still opt into the surveillance society willingly: the utility it brings them outweighs other considerations.

Ask people if they want to be tracked at all times, and they'll say "no".

Ask people if they want to be able to locate their phone when they lose it, and their answer might be different.

Ask them if they'd want to be able to call 911 and have someone come and help them even if they aren't sure where they are, and you'll get a different distribution of answers again.

In the latter case, lack of "surveillance" is seen as a "tragic shortfall" [0], and adding it is a "feature"[1].

So you see, it's not the surveillance per se that people object to. It's implementation details. Welcome to Ceglowski's world.

[0] https://www.usatoday.com/story/news/2015/02/22/cellphone-911...

[1] https://money.cnn.com/2018/06/18/technology/apple-911-locati...


> All these links are a year or two old.

Two of them are more than a year old, but the practices described in each are ongoing. The third, which describes Google's tracking of users after they've specifically opted not to be tracked is from nine months ago.

> A big push towards openness and privacy has happened over the last year.

After literally a decade of constructing what is very likely the largest database of personal information in the world. Since the late 2000s, when Google purchased DoubleClick, it has worked to collect information without the informed consent of its users. What fraction of your users know that Google purchases their credit card transaction histories?

What is the "big push"? The only things I can think of were the opt-in auto-deletion of a subset of data announced over the last week or two. All the user has to do is pay attention to the tech press, then remember to activate the feature when it launches at an unspecified future date!

What is this "openness"? Working on a censored search engine for China without informing their own head of security?

> ...people will still opt into the surveillance society willingly: because the utility it brings them outweighs other considerations.

Sure, they absolutely do. There can be significant utility gains from large collections of information. But much of that utility could be gained from information collected in an anonymity-protecting manner. In order to have traffic information, for example, Google doesn't need to continuously track your location history.

> Ask people if they want to be tracked at all times, and they'll say "no". Ask people if they want to be able to locate their phone when they lose it, and their answer might be different.

And neither of these requires surveillance. The phone could be located either by returning its location on command, or by uploading encrypted location data to which only the user has the key. WhatsApp, for example, shows that end-to-end encryption can be integrated seamlessly.
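The shape of that "user holds the key" design can be sketched in a few lines. This is a teaching toy only: the XOR keystream below stands in for a real authenticated cipher (a real design would use something like AES-GCM from a vetted library), and every name here is made up.

```python
# Toy sketch: the phone encrypts its location before upload, the server
# stores only an opaque blob, and only the user's key can recover it.
# NOT real cryptography - a stand-in for an authenticated cipher.
import hashlib
import secrets

def keystream(key, nonce, length):
    # SHA-256 in counter mode as a toy keystream generator.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def decrypt(key, nonce, ct):
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

user_key = secrets.token_bytes(32)            # never leaves the device
location = b"40.7128,-74.0060 2019-06-01T12:00:00Z"

nonce, blob = encrypt(user_key, location)     # all the server ever sees
assert decrypt(user_key, nonce, blob) == location
print("server stores an opaque blob; only the user's key recovers it")
```

The point is the architecture, not the cipher: find-my-phone works if the server relays ciphertext it cannot read, so the feature does not require the provider to see your location history.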

> Ask them if they'd want to be able to call 911 and ask someone to come and help them even if they aren't sure where they are, and you'll get a different distribution of answers again.
>
> In the latter case, lack of "surveillance" is seen as a "tragic shortfall" [0], and adding it is a "feature" [1].

Once again, this does not require ubiquitous surveillance, and it is misleading, at best, to imply that it does. Do you really not see the difference between location data provided to assist emergency response from a 911 caller and continuous location monitoring so that Google can serve more profitable ads?


Pre-disclaimer: I don't mean to pick only on Google here; it applies to any company that collects such a vast amount of personal data on users. Also, nothing personal :)

>Actually, Google has.

In extremely vague terms, yes. I want to see an itemized list.

E.g., at company X, this is what we collect:

1) Your name, age, location, DOB.
2) Your location is sent to company X every 10 minutes.
3) Your IP is tracked per session.
4) All this data is linked to your profile.
5) Anything you type in the search bar is sent to a company X server.
6) After anonymizing (if we do it), this is what your data looks like.
7) We never delete any of the above, for the following reasons: etc., etc.

>And moreover, Google has been consistently on track to store less private data.

The default should be zero/as little as possible collection of data. From what you've said it seems like people can opt-out of some data collection, but its vague as to the specific nature of what data is still being collected versus what isn't.

>Hope the links I referenced will help dispel this notion. Google does take privacy seriously.

Unfortunately they don't. I won't dispute your second claim.


Far better than an itemized list, you can download all your data from Google

https://support.google.com/accounts/answer/3024190?hl=en

> The default should be zero/as little as possible collection of data.

Really? What about telemetry for self-driving cars? Is it immoral to develop a system that leads to less blunt trauma and death on roads? We (HN users, I don't work for any of these companies) can define your term "as little as possible" about like you seem to define parent's term "seriously". The point being that such adjectives are difficult to pin down but also difficult to avoid. Define "difficult" however you see fit.


> What about telemetry for self-driving cars?

They own the cars so they can track them all they want.

Tracking me all over the place after I click the "Do Not Track Me" button isn't acceptable.

> Is it immoral to develop a system that leads to less blunt trauma and death on roads?

It could well be. Just as we humans decided not to use the scientific research the Nazis generated on unwilling human subjects, there are definite limits to what is acceptable, even if the overall benefits are huge.


Collectively, we did no such thing. Many individual researchers and journals refused to use Nazi research, but many felt that it was unethical not to use it if it could save lives. In particular, I believe that the results of Nazi hypothermia experiments were extensively used after the war. It's certainly not a cut-and-dried problem with an obvious ethical answer.


Facebook has their privacy policy too. So what? Even if all the listed policies are followed, even if they don't have loopholes (and they almost certainly do), Google still collects and retains a metric fuckton of information that isn't necessary to provide its actual services. The NYT article is a great demonstration. And there is very little oversight around this.


It's all here, and you can delete it (including batch delete by period or source): https://myactivity.google.com/

This page includes other types of data (e.g videos you upload to youtube or mails in Gmail): https://policies.google.com/privacy


Thanks for linking to the policy document. They have this convenient line that allows them to do anything.

"We provide personal information to our affiliates and other trusted businesses or persons to process it for us, based on our instructions and in compliance with our Privacy Policy and any other appropriate confidentiality and security measures."

>It's all here, and you can delete it (including batch delete by period or source)

That scratches the surface, but an iceberg hides underneath. For one, how do we know it's all the data? For another, there is no indication as to who has seen it or how Google uses it. That is my point. Google has never detailed those things, I suppose for legal reasons. A user has a right to know exactly what they are trading with Google in exchange for free services. They can then make up their own mind about whether they think it's worth it. I'm just picking on Google here, because it's a soft target, but it should apply to any service. We need new privacy regulations to formalize this.


Sounds like they just needed to spin up one "affiliate" and provide the data to that for data mining / etc purposes.

Anyone deleting the data "Google" holds would have zero effect on the affiliate, while giving some people the feeling Google was doing the right thing.


> It's all here, and you can delete it

So they claim, but I don't know why anyone should trust them about that.

Aside from that, though, what about the data collected from me? I have no Google account, but they're collecting data from me anyway. Same as Facebook.


> With Facebook, what annoyed me was that they set up internal teams to help with political candidates social media campaigns - no matter who the candidates are.

That sounds like the behavior of an honest service provider. Worse behavior would be helping candidates who match their political biases while working against candidates that disagree with them. That would look like abusing their position as steward of a worldwide platform.

> With Facebook, what annoyed me was that they set up internal teams to help with political candidates social media campaigns - no matter who the candidates are.

Isn't political campaigning part of the discussion? If so, why is it bad to give candidates equal access to tools to perform this campaigning? Would you begrudge a printer that would print signs for any candidate, no matter who they are? If not, how are electronic signs functionally different from printed ones?

> everything is happening violently.

I feel people are really abusing the word "violently" nowadays. Nothing that happens on Facebook is violence; it's mostly just talk.


>>google is a bigger concern for privacy and personal liberty

>Big claim. Any proofs?

All this skepticism around Google's capacity to abuse their troves of data and invasive services is a clear indicator that this discussion has very little to do with real privacy. It is mostly a playground for various corporate, political and media shills.


>>google is a bigger concern for privacy and personal liberty

>Big claim. Any proofs?

Well. How about the following facts?

-- Android market share is about 75 % of all smartphone users. Do non-smartphone users still exist?

-- It's probably safe to say that it's much, much easier to consciously avoid using Facebook's services than to consciously avoid using Google's services.

-- Android's hard-coded DNS server is a Google DNS server. The vast majority of people don't use a VPN (the only way in Android to change the DNS server is through a VPN setup) to get around this. I haven't looked it up, but it's probably safe to assume that all Chromebooks also use Google's DNS by default. My limited networking knowledge tells me that this means Google knows who's using what Android/Google device at what time (match IP to Google login to Android device), who's visiting what website at what time, and who's using what app at what time (requests to the app's server and to Google's servers for location, payment, auth and other info).

-- A lot of people use Gmail. Google can literally read all Gmail emails. Even those sent into Gmail from outside of Google servers.

-- The vast majority of Android users have enabled "Google location services" in Android. This is a one-time "click OK to continue" dialog that permanently enables these location services until disabled manually again in settings (nobody does this). Weather apps, Tinder, navigation apps, etc. almost all require it. This means those people are continually sending ~500 ms resolution data points of their location to Google servers, even when those apps are turned "off" (meaning running in the background; "off" doesn't exist in Android). Google can literally know how long you poop, who your secret girlfriend is and what specialist you visited at the hospital. The NYT had a huge piece about this: https://www.nytimes.com/interactive/2018/12/10/business/loca...

-- People use Chrome. With automated Google login. Meaning Google knows everything they do online.

-- I'm not even going to go into other popular Google apps, we all know them: Youtube, Google Assistant, etc. With all the accounts nicely automatically linked to one another.

-- All of the above information, and probably much more, can be used for profiling users. I think it's safe to say that Google knows absolutely everything about its users at this point.

Then there are the following:

-- Law enforcement (internationally?) can request any "sensor vault" data from any Google user in their country. AKA: they can request to look into all of the details of one's life.

-- The NSA can secretly request access to any of Google's data. Google is not allowed to disclose this access. By law. That's what they can legally do. Snowden has told us what they illegally do: anything they want. The NSA literally knows everything there is to know about everybody. In the world. And Google, by law, has to help them with that. In secret. This obviously has political consequences for the simple fact that "information = power". Those political consequences are not in favor of democracy.

Where I write "Google knows" I mean that Google servers receive that information (a fact). It is debatable whether Google stores that information or not. As for the location data mentioned above, it is a fact that third parties (app makers) do store this location information and sell it (illegally; see the NYT link above). It is also debatable whether Google deletes your information when you ask them to.

My personal guess is that absolutely all information is stored, but that this storage is not disclosed to the public. I personally also believe that when you request your Google data to be deleted, only "the public-facing layer" of your info gets deleted. I don't believe for a second they actually delete your "anonymized" data. In quotes, because such data can very easily be de-anonymized.

I mean, just the fact that there is deliberately no "DNS server address" setting in Android. Ask yourself why? Why would Google make it so much easier to just use the Google DNS server? Why does it offer a free DNS server to begin with? Why does all your location data have to go through Google servers before being consumed by the apps that run on your phone locally? That says it all to me.


Google makes a lot of genuinely useful products and services. We've all got to wrestle with the privacy tradeoffs of "free" maps, "free" email, "free" Android, etc. But at least the satisfaction of using well-built tools to accomplish more is enough of an offset for many people.

Facebook is much more likely to be seen as a guilty pleasure, or a marvelous time-waster, or something else that's a bit farther down the utility curve.


Perhaps in some countries/demos. I'd bet Instagram and WhatsApp, as free, basic communication, are seen as much higher utility by a significant portion of the global population.


But much of that utility comes from the fact that everyone uses it, rather than some inherent quality of the product, like is the case for Maps. Feeling somewhat "forced" to use it does not help with a positive view of the company.


> Instagram

Don't see its utility. It is a proper time waste.

> Whatsapp

It is indeed the most popular, but it isn't unique in what it offers. Many others do what WhatsApp does just as well, but none have the number of users.


Facebook makes products that many people find genuinely useful, it just makes fewer of them.


Google has your data and uses it for themselves. Facebook has your data and gives it (or leaks it) to anyone with money.


They both sell ads and offer advanced targeting options. Very similar businesses.


Facebook didn't get in hot water for selling ads.


Google and Facebook are exactly the same on this score. They both collect as much data about you as they can get their grubby paws on, they both use that data for themselves, and they both allow others to leverage that data in exchange for cash.


Has there been any evidence of abuse and misdirection on the part of Google at the same level as Facebook?

This is more than just negative press, this is a question of how data collection has been misused, and what lies executives have told about current and future plans surrounding privacy and data abuse.

But, personally, I'm staying away from FB, Google, Amazon, Snapchat, et al for the reasons you've mentioned; negative press or no, I cannot ethically work for companies that are haphazardly building the foundations of a potential technocratic dystopia in their chase for profitability.


I wonder how much of it is that Facebook isn't all that much fun anymore, and working there would provoke all sorts of "I don't use it anymore" comments from one's peers.


Or maybe prospects don't use Facebook either and it seems odd to contribute to a website you don't even visit.


Instagram and WhatsApp are more popular than ever, though.


If you have time, can you elaborate why you believe Google is a larger threat to personal liberty and privacy?


Google has far greater potential for invading an individual's privacy than Facebook does. Google has Android, Chrome, Search, Maps, and Gmail (and now Photos). Those are all very critical pieces of a person's real-world life. Facebook has FB, Instagram and WhatsApp. Yes, some private communication, but it's limited to "social networking". Your taxes and utility bills don't get mailed to FB Messenger. You don't search for cancer research on Instagram, and Facebook can't tell what other apps are installed on your phone.


I’ve been getting ads on Instagram for health issues I discussed on reddit.


I met a lady the other day who said that her job at google was a PM for their advanced technologies group doing “infiltration” into people’s lives.


well, it's the nature of the advertising business to defy privacy and liberty. competition occurs around how well you know consumers and how well you can manipulate those consumers into actions favorable to you (i.e., exerting power over you). further, online advertising is basically a duopoly of google and facebook, with google being twice as big as facebook and much more invasive.

google's, or more broadly, alphabet's, only competitive advantage is a thin lead on what might be called data intelligence (or surveillance, for the more cynical). they collect data across all internet ingresses/egresses, on not just those who opt-in, but even those who actively avoid google (through android, gmail, google apps, analytics, dns, internet access, etc.). and that data is super-valuable--alphabet had $30B in profits on $137B in revenue (an extraordinary margin).

to be clear, i'm not attempting to judge or disparage individual engineers at google. i'm sure most are mighty fine folks.

but for the foreseeable future, google really has no choice in the matter, not until it finds a different massive market from which to derive revenues. it's the nature of the business. and in the meantime, it's also under assault from intelligence, paramilitary, corporate, and governmental organizations from across the globe.

at least for americans, privacy and liberty are fundamental and inalienable rights. even though the constitution explicitly forbids only governmental interference in those rights, they apply more broadly to any entity, and particularly global corporations, attempting to exert power over individuals. and while inalienable, citizens still have a duty to be vigilant against such infringements.


I too was curious of this balance weighting. FB slurps in all of the data that users voluntarily post. Google just learns things through inference about users whereas FB is getting data posted directly by the user. Seems to me that FB is able to be way more invasive.


> FB slurps in all of the data that users voluntarily post.

That seems likely to be a grand understatement. FB has the opportunity to collect a great deal of data about their users beyond what they explicitly post -- for example, data about when and how they use Facebook mobile apps, how they interact with the Facebook web site, and what external web sites they visit which contain Facebook Like widgets.


Don't forget offline credit card transactions, FB and Google both.

https://www.bbc.com/news/technology-45368040


But Google does all of this as well, via its APIs/fonts/analytics/etc. being embedded everywhere.


On the other hand, FB is inherently social. I assume everything I give to FB has a chance of being public one day. I have some private conversations, but in the back of my head is that time the UI was deceiving and made seemingly direct messages public. FB is for sharing things. Google runs my phone, my work and personal email, my calendar, and more. I think they have a better attitude toward it, hence my willingness to trust them so far, but from a standpoint of ability to be invasive, Google blows everyone else out of the water on my devices.


Do you not consider Gmail data posted voluntarily by the user? How about search queries or calendar entries?


I can see your concern about messages via email, but I know for me personally, email is just not a thing anymore. Forgetting plain spam, corporations/marketing/etc. have ruined email into a signal with such a low S/N ratio that it's just not useful. What percentage of internet users actually use email for communication anymore? Sure, some, but it's not my largest attack vector (I consider Google/FB as attacking me).


Anything serious goes through email, and this is the data I'd be most worried about leaking - anything from security-related stuff like login/ID confirmation to receipts, confirmations, sensitive data, and professional communication.

Waaay more valuable than FB scraping my phonebook and photos


This may be true for personal communication but any sort of business deal is going to be happening over email. Mortgages, selling your company, large sales... All of the contracts are going to end up in your inbox.


> FB slurps in all of the data that users voluntarily post.

As well as all the data they can get, whether or not you even have a Facebook account. Real-world credit/debit card usage data, for instance.


Google collects more data, but they're much less free-wheeling with how they share it around. Pick your poison I guess.


I pick neither.

This whole debate about who is worse, Google or Facebook, is a bit ridiculous. They're both unacceptably awful, and practically speaking I don't think it matters which is more awful than the other.


> google is a bigger concern for privacy and personal liberty

I disagree. I think that on the whole, they're both about the same. But in terms of integrity, honesty, and ethics, Google has a (small) lead on Facebook.


Social networks live by the sword of voyeurism and die by the sword of voyeurism. This is not something Google has to worry about.


In my experience, Facebook used to be a cool thing to be on when you were documenting college party shenanigans and sharing pictures with friends, before it reached mass adoption to the point that your parents/grandparents were trying to add you as a friend. This was a time when organizing/sharing pictures with friends digitally was not a straightforward process.

I've come to terms with a simple fact of life: after graduating, it gets harder to make friends as you get older and settle down away from your college town. Most of the acquaintances I've added on Facebook might as well not exist, as we don't talk offline; my core circle of friends communicates over iMessage/SMS or various chat apps, and we try to make time to see each other, further cementing our friendships offline.

Another thing that bothers me about Facebook, since I first joined around the time a .edu email address was required (I think?), is that every time I visit the site, the new interface and feature bloat make it feel less and less like what made it dead simple to connect with people back in earlier times. The current experience for me consists of a noisy, ad-infested newsfeed, ultra-optimized to inject itself straight into your brain's reward center with statistically significant A/B-tested precision, and autoplaying clickbait media nonsense, all while functioning as an echo chamber for long-lost acquaintances' political outrage spam.

I wonder if people from my age cohort feel similar cognitive dissonance and that's why Facebook isn't even on their mind career wise, cause it's like an ancient digital museum that houses dusty pictures from their younger years and has long been replaced by Instagram.

Anyone out there relate?


> a simple fact of life that after graduating, it gets harder to make friends as you get older

This is not really a simple fact of life, in my opinion. It only gets harder because people make less of an effort. If you put as much time and energy into being social later in life as you do in college, then it isn't any harder to make new friends.

The main difference is that in school, you're automatically surrounded by a lot of varied people. Out of school, that's not automatic -- you have to intentionally put yourself in such situation. Often this is done by joining and participating in clubs and organization that cover things you're interested in (dancing, crafting, whatever).


That's my point exactly: relatively speaking, you'll never again be surrounded by ~30,000 university students forced to cohabit the same location in their most formative years.

The situation is much different when you have to find a babysitter for your kids to free up what little time you might have each day that is then split between you and your life partner to afford to socialize regularly.

I've just internalized this phenomenon as a fact of life after entering mid adulthood and settling down.


Ah, I understand. I was reading more into your statement than I should have. I'm a 50-something man and I often hear others of my general age complain about how hard it is to make friends, but they rarely realize that's something they can actually fix.

> The situation is much different when you have to find a babysitter for your kids to free up what little time you might have each day

Indeed! That was what taught me the real reason to arrange "playdates". It's not really for the kids, it's so that the adults can socialize with less hassle around babysitters and such.

But having children certainly makes lots of things more difficult. Mine are adults now, and I can tell you from experience that once the kids are off to college and beyond, then your social life can come back in its entirety.


> in school, you're automatically surrounded by a lot of varied people. Out of school, that's not automatic

That's exactly OP's point. Not automatic implies not as easy.


And, yet, here in the bay - my company (a startup) sent out two offers to candidates quite recently and they both went to FB instead.

There is no shortage of people joining FB because there's no shortage of people wanting to join a big company. Maybe if they're all comparing offers between big companies then they'll join some other big co but if the difference is startup vs Facebook... FB wins.


It seems like your company should consider remote workers. I live in Denver and have told Facebook recruiters that I'm specifically not interested in working at Facebook, but I would consider a remote position at a startup. I'm sure as hell not relocating to the Bay Area is all.


If we wanted remote workers then we'd hire people in Romania. Like the last place I worked at did.


Remote working in the same or similar timezones is nice, and no, not everyone needs to work in the same office.


Cool! I'm from Romania. What do you have on the menu on this fine evening? :)


If you think remote workers means a different timezone and a language barrier, you need to brush up on your knowledge.


Are you offering a competitive salary?


I mean... does any startup, compared to FAANG? Salaries are basically the same, but the total compensation is, obviously, wildly different, since the expected value of startup stock is horrible.


Facebook is offering new CS grads from top schools $180k+ a year plus $30k+ signing bonus. That's cash. Most startups can't afford that unless they are very well funded.


Levels.fyi seems to disagree

See here : https://www.levels.fyi/salary/Facebook/SE/E3/

It is closer to $155K + a signing bonus.


Hm, perhaps I'm thinking of a different company.


$180k salary? I know they offer some ridiculous numbers for top candidates but that seems high for any new grad. That'd push them near $250k with stock.


> I mean... does any startup when compared to FAANG?

Yes, in my experience. Unless, as you mentioned, you count stock options or similar (which I don't).


You should really count them. They're basically cash after you vest with a public company. Startups have a very low chance of that stock becoming worth anything even after it vests.

Comparing FAANG compensation at $300k TC vs. $180k salary plus Monopoly money... FAANG wins often enough, since you never get enough stock in startups for it to really be worth it (short of being a founder).
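For what it's worth, that comparison can be made as a hedged back-of-the-envelope calculation (all numbers here are illustrative assumptions, not actual offers, and `p_equity_pays` is a made-up parameter):

```python
# Back-of-the-envelope expected-value comparison of a big-co RSU package
# vs. a startup offer with options. All figures are illustrative.
def expected_total_comp(salary: float, equity_per_year: float,
                        p_equity_pays: float) -> float:
    # Expected annual comp = cash + equity weighted by the probability
    # the equity is ever worth its paper value.
    return salary + equity_per_year * p_equity_pays

# Public-company RSUs vest on a schedule and can be sold immediately,
# so their probability of paying out is ~1.
faang = expected_total_comp(salary=180_000, equity_per_year=120_000,
                            p_equity_pays=1.0)

# Startup options need an exit above the strike price; most never get one.
startup = expected_total_comp(salary=180_000, equity_per_year=120_000,
                              p_equity_pays=0.1)

print(f"{faang:.0f}")    # 300000
print(f"{startup:.0f}")  # 192000
```

The exact probability is debatable, but the asymmetry is the point: public-company RSUs are near-cash, while startup options are a lottery ticket stacked on top of salary.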


> You should really count them.

I've been in the industry too long to put any real value on them, regardless of whether they're from a Fortune 50 company or a startup. Sure, sometimes they pay, but it's always a gamble. I sorta view them more like lottery tickets than actual compensation.


Dude, the stock you get as a Facebook employee is literally money. It vests every month, so you can go to a broker every month and sell it for thousands of dollars of hard cash. No waiting for IPO, no hoping the stock goes up, no 4 year vesting, no board approvals, no nothing.


It may be close, but it is not literally money. If it were, then why wouldn't they just pay the money rather than going through the hassle and expense (for both the company and the employee) of issuing stock?

But that's all beside the point. I understand why these sorts of things may be appealing to people. They just aren't to me, so they don't factor in as "compensation" when I'm evaluating a job opportunity.


> my company (a startup)

ad-tech startup?


Nah. FinTech.


> It's no wonder the substantial portion of people who care about their employer's ethics are turned off

Nope. The people I know who turned down FB offers did so purely because they see it as a less stable company and have doubts about whether its stock will keep falling. No one wants to wake up a month later to find out that their signing bonus just got reduced by 10% due to a bad news cycle. I would estimate that less than 10% of people turn down an employer over privacy-related ethics. Also, as a side note, FB has jacked up stock bonuses for existing employees. Their attrition rate is virtually unaffected despite all the bad news.


It is not only about ethics.

With SO much negative press, I feel that Facebook has lost its mission among the wider public. If it is a net bad for society, or even just perceived as one, it is hard to hire someone who shares that vision with you; you get only mercenaries.

Good people are weird, though. They work for money, like everyone else, but not just for money.


> one of the things candidates respond to most is the story we tell about our company's past, present, and future

I hear this storyline fairly often (though exclusively from corporate recruiters) and I have a super hard time understanding why this would matter. Can someone who actually listens to this kind of (IMO) propaganda weigh in and help me understand why it matters to them?


It matters in terms of internal opportunities to advance. Say you're one of the first data science or product people in a fast-growing company; that's a lot of potential opportunity for someone ambitious and self-driven.

It also matters in terms of how good this will look on my CV and the story I can tell later. I joined a now well-established, fast-growing tech company as employee no. 267; we're now at ~1200. That looks great on my CV, or in an interview, where if I talk about scaling issues (both technical and cultural) they'll likely believe me.


It matters to the same degree (and for the same reasons) as any other marketing pitch matters.


I agree with everything you've said until the last part. Google is only marginally better than FB when it comes to some of these privacy issues. The issue people have with Facebook is that it has a reputation for being a pressure cooker.


In other words, Facebook now has no redeeming qualities.

I've gotten a bunch of pings from them over the last few months, and I just chuckle, say "hahahano", delete it and move on. I don't know if it's a coincidence that the pings happened after the scandal or if they have gotten into 'look under every rock' mode.


I think it is the latter. All the pings I see now are so mundane and banal (most of mine are friend suggestions for people I’ve never met). They really must be scraping the metaphorical bottom of the barrel.


> It's telling that a substantial portion of our candidates admitted to considering competing offers from Google, but literally no one was considering Facebook.

Perhaps they were ashamed to admit it (?)


Anecdotal, but in the past year I had tons of recruiters from Google/Amazon/etc. knocking on my LinkedIn inbox. However, not a single one from Facebook. Maybe they simply didn't fund recruiting efforts as much as the other tech companies, or weren't hiring as aggressively.


One of the (many) things that pleased me about deleting my LinkedIn account was that I no longer routinely heard from the likes of Google/Amazon/Facebook.


Plenty from FB here, to add another anecdote.


There was a time when recruiters would put on a sheepish and embarrassed-to-bring-it-up look when mentioning the higher paying jobs they had for tobacco companies. Paid more, but few wanted the social stigma, even if their personal ethics were OK with it.


IIRC, Google places a much higher emphasis on making counteroffers in the first place, as well as making those counteroffers hard to refuse.


Definitely not true.


This is not a surprising headline. If you have values about privacy, decency, civil discourse, honesty or integrity, you wouldn't want to work there. Also, if you feel the company was collusive or willingly complicit in the dissemination of fake news and Russian propaganda efforts during our elections, it'd be a big fat "no" to working there. And it's not just our democracy that is undermined by FB. There's a litany of abuses that they have either been horribly naive to or downright negligent in addressing.

If you are bright-eyed and optimistic about Facebook, I'd be interested to hear your counterpoint to all of the scandal. I don't think any company in FAANG is an altruistic enterprise, but it isn't surprising that FB would see a decline in hiring.


> I don't think there is any company in the FAANG that is an altruistic enterprise

I feel like Google started that way, and then lost its way sometime between 2009-2012.

Projects like Google Scholar, Google Books, Google Summer of Code, Google Reader, Google Open Source, Google.org, and pulling out of China didn't really have much of a business justification, but were simply something good that they could do. Unfortunately they're a public company, and when you start struggling to meet analysts' (perpetually inflating) estimates, being good - or at least not evil - is usually the first thing on the chopping block.


Google never figured out how to make serious bank outside of the marketing department.

The fact that they kept the wheels on as long as they did, I gotta give them some respect for that. But they were always destined to end up being amoral at best and a cesspool at worst.

If you are starting a company and think you want to be proud of it for the rest of your life, sell a real product, not your users.


Not true: they make billions from cloud services.

Re: “you are the product” meme. I guess it’s a mechanism for raising awareness of privacy violation, but I really don’t like it. If you were literally the product, you would be a slave. You’re not. What they sell is your attention.

A big reason for not liking “you are the product” memes is it misses the key aspect of manipulation, which phrases like “the attention economy” capture. You are being manipulated into giving up more of your time and attention.


Honestly, that seems like splitting hairs to me.


If by "marketing department" you mean "advertising business" you would be correct. But I am skeptical you meant that.


Yeah I mean the advertising business. I was feeling a little salty.


I don't think that it has anything to do with altruism. Back then, it was not the right time to optimise for profit as new and exciting things were happening daily, it was time to explore not to exploit.

These days the exciting things are happening in other areas, so for the Internet giants, it's time to optimize for profit.


Also, as a smaller player, they stood more to benefit from open source projects (Android and Chrome) and open standards (the web and email). Now that they're on top, the most rational strategy is to secure their position by destroying the bridges they used to get there, locking down those open technologies.


In other words, the most rational strategy is to become evil.


Correct. Which is why pure capitalism is broken.


> it was time to explore not to exploit.

Is that not a pretty good definition of not being evil. IMO it is still the time to explore not exploit, even if that's not what they're doing.


> and then lost its way sometime between 2009-2012.

Tahrir Square was the high water mark of the old school techies. The failure of tech to effect real and lasting change really hasn't been understood by the techies, even still. That optimism about the future and tech's role in it, is gone.


Based on discussions I’ve had with Egyptians, Facebook was used to track down dissidents after the counter-revolution that brought Sisi into power. Not sure if it was Tahrir-era posts that got them into trouble, or criticism of the Sisi government.

The only lasting legacy of social media’s role in the Arab spring seems to have been inflating the self-worth of high level execs, and blinding Obama-era officials to the way these sites could be turned into tools of disinformation and repression.


"The only lasting legacy of social media’s role in the Arab spring seems to have been inflating the self-worth of high level execs"

When the media talked about the "Twitter revolution" I still remember thinking that there were people risking their lives on the streets and how ridiculous it was that some social media guys drinking lattes in their offices got the credit.


When you spend enough time in the future, you forget all the shitty things about the past that tech has changed and only notice the problems that stand out today. Not sure if you're specifically referencing Tahrir Square with your second sentence, but tech has definitely led to real, lasting and immensely positive change worldwide.


Oh yeah, but I think all the wind went out of the sails after Tahrir Square.

Before, there was such optimism about tech. Nothing could stop it. Everything would be just better.

Look, the oppressed are rising up together! Look, medicine is getting better! Look, we're talking at each other, not shooting and hurting!

The Arab Spring was the high point, the proof of the pudding.

After the failures there, sure, yes, tech has helped, has advanced the world. But that optimism that was in Tahrir Square never came back. FB was a way to talk with each other and be a 3rd space; now it's a Skinner box. Wikipedia was the nascent Encyclopedia Galactica; now it's just mostly good and sometimes suspicious. Google wasn't evil; now it works with China to make Orwell sigh.

Things are chugging along, yes. But before people actually thought they could change the world for the better, now tech just has mortgages.


Google had been receiving shit from people for violating privacy ever since they had the novel idea to release a free email service that scanned your emails to deliver targeted ads. The consequent centralization of email (ISP-provided email pretty much died after) was subsequently used to allow the NSA to scan a huge amount of people's personal information.

I think in the last few years is when things tipped to me distrusting Google more than the boogeyman of older times - Microsoft.


Forget the election, just what social networking is doing to young people's minds. They're making money by making a lot of people miserable - just not how I'd want to make a living.


Making people miserable and unable to understand the world outside these addictive platforms. I know so so many 20 somethings who genuinely don't understand the facade that is social media. They're giving up their youth in pursuit of a drug and they don't even realize it.


I have no money and a very shitty laptop, and thanks to Google Colab's free, hosted Jupyter Notebooks I'm having a blast learning Keras.

I'm not saying they're saints, but they've given me something free that's improved my life. Maybe it's ultimately greedy in the sense that later if I need a cloud platform I'll definitely use GCP. But I think that kind of mutualism is actually better in practice than altruism.


Also given the power and information that they have I think they've been fairly well behaved. I'm not sure what other commercial management I'd rather have have all my search and email info than the google guys. I guess a non profit might have advantages but then who'd pay for all the servers and the like?


> they've given me something free

No, they haven't. They're just making you pay with a different sort of currency.

I'm not saying that's good or bad, and I'm not saying that you aren't getting value for what you're paying. I'm just saying that the notion that these things are "free" is incorrect.


What is the currency?


Data about you and your use of your machines.


Don't forget about WhatsApp. It was the main channel for the dissemination of fake news in the Brazilian election. Now we have a global warming denier in the presidency and Amazon deforestation is reaching record levels.

Sure, there isn't any company in FAANG that is an altruistic enterprise, but the only purely evil one is Facebook.

What really impresses me is that there's still a lot of talented people working there.


Any communication platform that is easy to use and easy to reach people on, and will therefore be popular, is a great channel for the dissemination of fake news. Well, guess what: it is also a great channel for communicating non-fake news, and talking to people who matter to you, and sharing your interests with like-minded people, and...

Blaming the platform for carrying fake news seems disingenuous. Fake news has been spreading over every available channel ever since humans learned to talk and figured out that they could tell lies to each other. Blame people for believing most of anything they're told.


I think one needs to be very intentionally oblivious to not notice the qualitative difference between fake news of the past and fake news right now.

Fake news in the past always had an identifiable source, because there was still an institution, a company, or someone with their name on the door between reader and publisher. As it stands, no such barrier exists any more. Things can be inserted by malicious actors into the debate, and they spread automatically simply because they have the tendency to 'go viral', something entirely absent in the past. That has added a completely new set of problems.

>Blame people for believing most of anything they're told

Precisely because it is very much in everyone's nature to suffer from these mechanisms it makes no sense to blame ' the people'. What does this imply, a great re-education of everyone? Obviously the only thing we can change is the companies, institutions and rules that determine how we consume the news, not how human brains disseminate them.


Yes, the Internet has made spreading fake news easier, but let me reiterate my argument - the Internet has made communication as a whole easier, so it's only natural that fake news spreads more easily. However, that is not the fault of any given communication platform on the Internet, unless that platform happens to explicitly select fake news stories to spread, and suppress everything else. Which is certainly not the case for WhatsApp.

Even in the past, good old rumor mills (I heard it from a friend of a friend of a cousin's barber) were reigning supreme in spreading bullshit around, by simple word of mouth. The Internet here is just a compounding factor to something that is very, very old and already very, very effective.

Edit: expanded first paragraph


>Yes, the Internet has made spreading fake news easier, but let me reiterate my argument - the Internet has made communication as a whole easier, so it's only natural that fake news spreads easier.

I totally agree, but it doesn't follow that this means that the fake news situation is acceptable, or that Facebook isn't responsible as a platform which the OP argued.

With the increased density of urban living came more opportunity but also more crime and disease. We don't shrug and accept that bar owners have no responsibility as platforms; we give them a set of safety and health regulations and responsibilities, and we equip the police with tools to combat crime.

So in that spirit, just as the internet isn't the internet of the hacker and small community age, companies should have to deal with the problems they produce. Just like everyone else always had to.


You're trying to cure the symptom, not the disease.

The disease is people being morons. If you want people to not be affected by fake news, making platforms censor people won't have any positive effect. You have to educate people.


No it didn't, that's historical revisionism. Go read any historical text or even Chinese censorship propaganda today and you'll discover the authorities were/are obsessed with "rumours". Attempts to control what people can say are a historical constant, the only difference between then and now is the lingo has changed.

There's no need to change anything, people or companies. Left to their own devices people figure out propaganda eventually. It may not be in the direction to your liking, but then, as a supposedly rational person, you must accept that maybe it's you who is victim of propaganda and the other people who are not.

The best example of this in recent times is the large number of supposedly smart people who fell in love with "The US President is a Russian spy" as an idea, which was based on nothing - it was rumours, it was fake news, it was propaganda distributed by the press, and now 50% or more of the US population agree with their president that it was also a witchhunt. Seems like people were drowned in fake news and still, a large chunk of them understood it was fake. Of those who still believe it, it might be more accurate to say they wish it was true - but that's a common theme in all rumours and propaganda throughout history.


> Don't forget about WhatsApp. It was the main channel of dissemination of fake news in Brazilian Election.

How would you solve this problem?



I asked one, who pointed out he'd like to make change from within rather than blog about them being evil from the outside.

I don't know which way is right.


FB's core business model is the root problem. "Working from within" is vain, naive, and futile; only Zuckerberg has the power to change the business model, and we all know that's not happening.


Unpopular opinion:

I don't have a problem with business model (targeted ads) but I have a massive problem with lack of honesty, and this is the distinction between Facebook and Google for me. Google tells you what they collect and gives you the controls to delete it. This is enough for me.

Facebook struggles to remember that I want my timeline kept private.

I also believe that Cambridge Analytica was no accident. FB knew what they were doing, and they decided to throw them under the bus when the media turned on them.

Trust is hard to build up and can be shattered in a day.


>Also, if you feel the company was collusive or willingly complicit in the dissemination of fake news and Russian propaganda efforts during our elections, it’d be a big fat “no” to working there. And it’s not just our democracy that is undermined by FB

Come on. According to FB the IRA had 80000 posts over a two year period. In the same period there were 33 trillion FB posts. What moron still believes this garbage?

FB was hung out to dry by Congressional democrats too spineless to own up to their own pathetic failure to defeat Trump.


I don't think you can discount that a concerted effort to create viral content will spread much farther than arbitrary wall posts by individuals. There are statistical methods Facebook could use to figure out how much of an impact that they had and I have not seen any such analysis yet.


That's only one reason this whole thing is bullshit. The other is that the alleged content is just random gibberish with no obvious intent or means to subvert anything. It's only by assuming that every post had its maximum theoretical pernicious effect (and that a pernicious effect was the intent in the first place, which is just supposition) that this whole thing becomes meaningful.

It is the desire to make this assumption (that Russia subverted the campaign) that drives the conclusion more than anything else. None of which is to say FB is innocent of blame. But their crime is hooking up an ad network to the social network, not colluding with Russians.


Viral gibberish can still influence subconsciously. But I think FB gets more blame than they should out of all this, and agree the collective is trying to pin the blame for complex social trends on a singular actor. Domestically, the fault is on FB, internationally on Russia, clean and tidy right? Makes it seem like regulation will solve the perceived issues next time around, while the bulk of the real issues are overlooked or ignored. I like Martin Gurri's take on this right now.


Its also worth noting that when something goes viral, its often not contained on one social network, and it becomes impossible for the platform to measure its reach and impact.

How could twitter, for example, really measure the impact of something like that video of the Covington High School kids, which was amplified on twitter (shared by a fake account, IIRC), picked up by the media, and then talked about incessantly for weeks, all over the place?


How many ads were there? If trillions of messages are needed to influence behavior, then Facebook ads would have no value.


It's surprising because it's totally false. Read the above comments.


I'm not sure how the journalist fact checked this, but in 2016 CMU sent 12 people to Facebook[1]. In 2018 CMU sent 27 people to Facebook[2].

[1] https://www.cmu.edu/career/documents/2016_one_pagers/scs/scs... [2] https://www.cmu.edu/career/documents/2018_one_pagers/scs/1-P...


Those numbers are almost perfectly in line with the growth of Facebook, from 17k employees in 2016 to 36k in 2018.

https://www.statista.com/statistics/273563/number-of-faceboo...


Or 29 if you include WhatsApp.


Those are just SCS numbers. The CNBC article cites all of CMU. Facebook recruits from the math, engineering (EE), and info systems programs as well.



Huh, those numbers are lower than I expected. Perhaps CNBC counted grad students as well, which is half the CMU population I think.


As much as everyone wants to believe this is because all the applicants are suddenly taking strong ethical stances, I bet it has more to do with Facebook simply not being considered cool or exciting anymore.


Sure, but one of the biggest reasons it isn't considered cool or exciting anymore is all the negative press.


Really? Obvious data privacy issues finally becoming mainstream is what is convincing programmers not to want to work there?

Hasn’t all of this stuff been obvious forever to programmers?


It's possible now, that this information has gone mainstream, programmers worry about how their non-tech friends view them for working there.


> Hasn’t all of this stuff been obvious forever to programmers?

Yes, but it wasn't at the "oh crap elections were manipulated, democracies toppled, dissidents tracked down, and genocides enabled" level.

The fact that Apple, the world's richest company, now has a mainstream marketing campaign around privacy tells you it is now officially mainstream mainstream, not just programmer mainstream.


Yes, I think we're finally getting to the state of engineering and medicine in the 1800s, where bridges and buildings were collapsing, snake oil salesmen and physicians were indistinguishable to the layman, etc. Enough catastrophe will eventually motivate society to regulate the upstarts.


None of those things have happened though, have they? Which democracies has Facebook toppled? Which genocides did they "enable"? And as for "elections were manipulated", I'll give you that one, but the only actual evidence I've seen of Facebook manipulating elections is shutting down the pages and followers of actual conservative political parties. The whole Russia story turned out to be smoke and mirrors, and based on rather huge assumptions about the efficiency of political advertising to begin with.

Apple use privacy as an attempt to differentiate themselves from Google despite in the phone market having very little actual differences between them. It doesn't seem to have helped: Android is globally dominant.


> Which genocides did they "enable"?

Myanmar

> It doesn't seem to have helped

yet. Also Apple's goal with the iPhone isn't global dominance.


> Hasn’t all of this stuff been obvious forever to programmers?

I don't think so, considering all the devs who called others "paranoid" before for raising these issues.


I think this story is submarine PR paid for by Facebook to garner sympathy.


Agreed. I would argue that Facebook is not considered cool as a direct result of all the outrage surrounding it.


Facebook was uncool before the outrage really took off. It's bloated and fewer young people from each cohort take to it each year.


Its ML research is exciting. I would like to work with Yann Lecun


And the root cause of its suddenly "not being considered cool or exciting anymore" would be?


... not really innovating? Its main product is still centered on social signaling and gossip, just like day 1. Also, the social craze is not so crazy anymore (I wonder, how are the other social apps doing?)


Limited upside for the stock?


Yeah, it's seen as the platform your parents (or worse, grandparents) use. Pretty much a step above Next Door. Why would you want to work for that over some of the other companies out there?


Facebook is also Instagram and WhatsApp, two platforms used by young people.


My teenagers have pretty much moved on from those.


> My teenagers have pretty much moved on from those.

Out of curiosity what have they moved on to?


TikTok is pretty popular these days, for one.


Honestly they don't want to tell me :-) They really, really don't want their parents on their apps.


I've known several people who would no longer work for Facebook, but the Cambridge Analytica scandal isn't the biggest concern. It's the fact that they are censoring people, even within private groups.

I have a friend that jokingly said (in a private group) that men are vile pigs. We knew she was joking - it was good natured. Yet, Facebook issued her a warning and removed her post and threatened her with a ban. First they came for Alex Jones and I said nothing because I don't like Alex Jones (and think he's insane), but now that the precedent is set that Facebook is the speech police, it will expand to us all (especially with their machine learning advancements that are here and yet to come).

The EFF has a really important article about this that I implore everyone to read[1].

[1] https://www.eff.org/deeplinks/2018/01/private-censorship-not...


For a detailed nuanced piece about how FB handles some of this complexity check this out: https://www.vanityfair.com/news/2019/02/men-are-scum-inside-...

FB has its problems, but I generally find the negative press overstated and wonder if Zuck's approach to interact with the press and congress actually backfires (compare to the other companies which largely ignore them). I appreciate how often he talks to the press to explain what they're trying to do though.

I also see the Cambridge Analytica scandal as what it is - permissive APIs that were abused and then locked down. Cambridge Analytica is to blame in this for abusing TOS and behaving badly, FB is arguably negligent - but I think the reaction is extreme.

Plus from people I know inside FB there really is a huge funded effort to stop abuse and manipulation via 'integrity' teams. It'll be interesting to see how they modify things given Zuck's recent pivot towards focusing on privacy as a core feature.


>For a detailed nuanced piece [...]

I don't see any nuance, I see invented complexity masquerading as something sophisticated.

This article is written as if they're trying to suss out the perfect boundaries along which to apply censorship. That frame of thinking is a con. They will never have a perfect censorship implementation because there is no win state.

Let me back up: As you read this, Facebook has a censorship policy. What would it take for Facebook to know they're done arguing over what does or doesn't get deleted? How do they know when the censorship policy isn't good enough? Advertiser pressure, press pressure and political pressure. i.e. fashion.

The con should have been obvious to everyone present at (or who read about) the meeting described in this article, where Facebook attempted to invent a principled stance that allows casual misandry while banning similarly-tempered casual misogyny. But I'm perfectly willing to believe Facebook can't see it. I can explain.

Facebook's stance towards nudity has a very clever property (that is almost certainly unintentional). It allows users to lie to themselves. Facebook could automatically hide nudity from everyone who isn't an opted-in adult, and still throw it behind a Twitter-style click gate so there is no accidental NSFW at work. But they don't. They'd have to add an "I'm not a prude" checkbox. And THERE'S the rub. The lack of such an option lets people uncomfortable with nudity tell themselves that they're not the kind of person who's uncomfortable with nudity. They want to think they're sex-positive enough to allow nudity, and just reasonable people who don't mind if it happens to be banned. Even more importantly, it lets them avoid thinking about the question "am I so uncomfortable with nudity that the idea of other people - the WRONG PEOPLE - seeing it makes me uncomfortable?"


They're not a government and it's reasonable for them to have some editorial control over what's posted to their platform (in the interest of keeping it a place their users want to be).

A lot of this is detailed in the article, it's helpful to read it.


> They're not a government

I'm not saying they are.

>it's reasonable for them to have some editorial control over what's posted to their platform

I'm not saying it's not.

I'm saying that their editorial policy is (in part) driven by fashion instead of principle. And complexity is used to obscure that rather than reveal it. My attempt to use Occam's razor to explain the obfuscation leads me to conclude there must be some utility that caused the system to evolve in such a way that it's possible to avoid seeing/acknowledging that.


> if Zuck's approach to interact with the press and congress actually backfires

Since Zuckerberg always seems to be equivocating if not outright lying when he speaks, and his historical response when Facebook has been called out for being abusive has always been lots of promises with no real changes, I suspect so.

> It'll be interesting to see how they modify things given Zuck's recent pivot towards focusing on privacy as a core feature.

That entire effort is Facebook's attempt to change the topic to something that is less threatening to them. The real issue is the privacy invasions from Facebook itself. Everything about their "pivot towards privacy" is premised on the privacy threat being actors external from Facebook.


It’s really easy to win the crowd by calling everyone else biased. When FB is one of the biggest lobbyists in Washington I highly doubt the press has ever been critical enough.

They got by on being the new darling child startup and built up a gargantuan pile of moral debt which they are now fairly paying for.


Wow that article was incredibly relevant and super interesting. Thanks so much for the link!


How did FB find out about her post?


Privacy in any unencrypted cloud service is an illusion.


What if someone in the group reported it ?


I suppose it's possible but unlikely. I'd be shocked if that were true.


I have no idea. I would guess they have an algorithm that checks every post and makes determinations.

I would have thought a private group would be immune, but apparently not.


Recently I think the scandals haven't been the single biggest factor when deciding between Facebook and other firms.

The common reason I heard from most of my friends who turned down FB, or quit FB, was that the working culture is too demanding and high-pressure. Google, on the other hand, is more laid back and family friendly, so people who are starting a family will prefer Google over FB. The nice thing is that FB tends to offer a higher level than Google, so in some cases, if you get matched, it works out pretty well.

I have a friend who worked at FB. After he came back from paternity leave, his manager told him he had been slacking (his reviews were always "meets all"/"exceeding" before) and that it was time to put in more work. He quit after a month.


Honestly I think it's more than just that. Facebook is no longer the cool startup building the world's favorite website. They're a multi-national advertising mega corp, and TBH most people just don't want to work there.


But Apple, Google, Microsoft aren't the cool start-ups either and they are mega-corps, yet people still want to work there. So I don't believe that line of reasoning holds up.


In terms of cool tech, Apple/Google are absolutely a cut above Facebook now in my experience. Microsoft used to be below, but they rose as they embraced more open source and modern tech; I'd probably put Facebook/Microsoft/Amazon about even in the cool category these days, whereas Facebook was stereotypically above Microsoft before. Amazon depends even more on who you ask for a definition of "cool".


I would wager that at all of these companies, how “cool” your job is depends heavily on your specific team and role. I’m sure there are thousands of people with very uncool jobs at Google, and thousands of people with very cool jobs at FB. Comparing two companies of that size and saying one is cooler than the other is probably an oversimplification.

That being said, I share your perception that Google and Apple are cooler than FB. :)


I think your wager is completely accurate. With Amazon as an example, depending on your team you could find yourself doing high-performance work in cutting edge languages on a globally distributed high-availability FaaS platform, or you could be using Perl 5.8 to make tweaks to a website that will only be displayed in certain parts of India.


You may have forgotten that Facebook owns Oculus and is doing some pretty cool things in the AR/VR space.


I am sure that people are still super psyched to work at FAIR and Oculus, but those teams are quite small and AFAIK don't hire using the standard FB SDE hiring process.


At least to me, Microsoft's desirability has gone up considerably over the last decade.


Totally agree. There are dozens of companies that aggregate user information (Google amongst them). Like @bognition said, FB lost its cool a while ago.


Is collecting user information itself considered unethical? Isn't the unethical part abusing sharing of that information?

No one rallies against the US Census as unethical


> Is collecting user information itself considered unethical?

If it's being done without the informed consent of the user whose data is being collected, then yes.


A long while ago. In Australia I'd say that 2009 was probably the end of FB being perceived as (at least somewhat) cool. 2008/2009 was when I remember being encouraged to join up by peers. My joke way of deflecting that (because I've never wanted to join) was to say that I'd applied but been rejected for not being cool enough. This reply would often leave people non-plussed - I was ironically mocking the idea of FB being cool (at the time) but I don't think most people got the joke.


Since 2014 yes


Is anyone?


I can only think of SpaceX, but they're not a website.


Yes, but you haven't heard of them yet.


They're not "the cool startup building the world's favorite website" until we've heard of them.


My good dude, once you've heard of them, they are not cool anymore. Cool is exclusive. Cool is mysterious. Cool doesn't care what is popular. When cool goes mainstream, it's not cool anymore! The trendsetters have already moved on.

