> Usually half of the close is done for recruiters with the brand Facebook has
I'm also finding that company brand plays a huge role in closing candidates. Our company's brand is generally pretty strong, and I've found one of the things candidates respond to most is the story we tell about our company's past, present, and future. Facebook's story has become "we were founded by a jerk who didn't care about privacy, our not caring about privacy has had massive consequences for American and global society, and our promises to improve our approach to privacy in the future have proven to be disingenuous smokescreens."
It's no wonder the substantial portion of people who care about their employer's ethics are turned off.
This doesn't sound right. Can you add some citation or detail?
And the smaller community banks have less onerous regulations than the big ones. Banks have to be much better capitalized and carry less leverage because of Dodd-Frank. That's why the return on equity is mediocre.
This is historically high.
Matt Rhoades, the person who runs Definers Public Affairs, was the opposition research director for Bush/Cheney '04.
> In 2004, he played a critical role in President George W. Bush’s winning re-election campaign, serving as the Research Director for Bush-Cheney ’04, where he helped develop the campaign’s opposition research, message development and rapid response operations.
The oppo research director of the campaign can have no official links to the 527 group, of course. But the group exists for one reason only - to discredit political opponents.
Interesting anecdote. Google is a bigger concern for privacy and personal liberty, yet job seekers are shunning Facebook because of the more wide-ranging negative press.
Big claim. Any proof?
With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns - no matter who the candidates are.
Extremely worrisome if you prefer people to be elected through a democratic process based on discussion. This is not about understanding what people want or how they think through big-data analysis; it's about manufacturing it.
Sure - provocations and lies are not new; that was always the case in politics. But with social media everything is at scale and everything is happening violently.
(EU citizen here): I would prefer corporations (especially ones with such depth of funds and breadth of influence) be kept entirely outside the electoral process. If that's not feasible, then the second best option is, indeed, that they provide the same service to all candidates, no matter who those candidates are.
Am I getting it right that you'd prefer they pick some candidates to help, to the detriment of others? Because that option does not sound very healthy to me, personally.
Nobody is allowed to give you money or anything else of value (including free airtime) -- not individuals, not companies, just the FEC. Anything else is bribery, and a crime. That way, you're not going to do things just to please your benefactors and gain an edge over your opponent next time, as your war chest is already accounted for. Instead you can focus on doing what's best for the people.
Elected can propose and pass but sortition house can block.
Basically the Commons/Lords setup, but with the lords replaced by random people.
That way the politicians answer directly to the people (or a random sample) of them.
I've been musing on whether there should be some small barriers to joining, as there are in jury service - perhaps the elected get to oppose a certain number of candidates, or candidates must pass a civics/governance test first so at least they have some technical knowledge going in. I can see that being twisted into something bad, though.
Much as I dislike the Lords Spiritual in the current system, I wonder if the sortition should embrace it and be a "tulip farm" of certain interest groups e.g. 20 each for religion, business, justice, commoners etc, as then there's a definite base of understanding in important areas.
However it would be arranged it'd be hard for it to be worse than having the Lords full of lords though.
There's no quid pro quo bribery, but if the NRA spends a bazillion dollars attacking your opponent but not you, it'd be hard to say there's no influence on your decision-making process. At the same time, it's really tricky to ban. Is something like Michael Moore's Fahrenheit 9/11 a form of political advertising?
Not only that, is CNN or Fox News coverage of a sitting politician who is running for reelection a form of political advertising? How they choose to report stories -- and which stories they choose to report -- can certainly affect how the voters view the candidates.
Probably the best solution is to just have a signature requirement where if you get that many signatures, the government gives your campaign an amount of money equal to the average amount of private money raised by successful candidates running for the same level of office in the previous election.
Then the average privately-funded campaign will have twice that much (if they get the signatures too), but a factor of two isn't huge here. It's more of a threshold situation where once you reach a saturation point it's diminishing returns. Get the candidate to that point with public money and the value of trading legislation for private money would be much diminished.
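To make the arithmetic of that scheme concrete, here's a small sketch. All the dollar figures are made up for illustration; the scheme above doesn't specify them.

```python
# Hypothetical totals raised privately by the winning candidates
# for this office in the previous election (illustrative numbers).
prev_winners_private = [2_400_000, 1_900_000, 3_100_000]

# Public grant = average private money raised by past winners.
grant = sum(prev_winners_private) / len(prev_winners_private)
print(f"public grant per qualifying candidate: ${grant:,.0f}")

# A privately funded candidate who also qualifies for the grant
# ends up with roughly 2x the grant -- but past the saturation
# point, the extra money buys diminishing returns.
privately_funded_total = grant * 2
print(f"privately funded rival: ${privately_funded_total:,.0f}")
```

The point of the factor-of-two bound: the publicly funded candidate is guaranteed to reach the saturation threshold, so selling legislation for private money loses most of its value.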
Of course, you still have the problem that too many people vote for who cable news tells them to.
Campaigns do spend their own money but it is capped at some low value to even the playing field, and they also receive public funding. TV stations are required to give equal time to all candidates (in the 2007 election there were 12 candidates, only 3 of which had any realistic chance of winning, but major TV stations spent equal time interviewing all of them).
That sounds like a form of extreme boycott. However much I despise politicians in general, subjecting them to essentially expulsion from society (at least until the election ends), plus a complete gag order and media blackout (because otherwise I could promote a candidate without giving them money directly - I would just publish ads under my own name praising the candidate), just for wanting to be elected seems a bit extreme to me. Not to mention that, at least in the US, it's probably incompatible with half of the constitutional amendments.
Do you mean corporations being banned from providing services to any electoral campaign? Probably not, because in that case election campaigns would be impossible. If so, then Facebook would be free to provide promotion services to any political campaign too - they are a service provider like any other.
Would that be a bad thing?
I would love an election that was simply an announcement of the election date and a website to see the candidates and their politics. The only campaigning would then be the government campaigning to get people to vote.
If you want to have elections, yes. If you prefer hereditary monarchy, then you'd be fine.
Leaving Google, by contrast, is way more difficult: their ecosystem reaches literally every corner of the web, and you have to deal with it even if you don't consciously use any Google product. For example, if Recaptcha doesn't like you, everyday online tasks like paying public school fees or signing up to an online forum become much harder. Another example is AMP, where the fact that you are reading an article hosted on Google infrastructure is often hidden from you; there are many more examples. Trying to quit Google feels like that episode of Black Mirror where a woman is ostracised by everyone because she doesn't have the same cybernetic implant everyone else is using. Just because Google hasn't been caught in any scandal comparable to Cambridge Analytica doesn't mean it's OK for them to have so much unchecked power.
See my submission history for details.
Google runs search and email for essentially the entire web, controls the market dominant browser and mobile OS, has tracking scripts on >75% of the top million websites and runs a fair amount of the internet's infrastructure. It is the senior partner in the online advertising duopoly (together with Facebook) and runs one of the three major cloud computing services. It has also become the de facto standards authority for the internet and runs a massive continuous operation to collect photos of every street on the planet, which it is now expanding into interior spaces. It sells always-on microphones for the home, as well as a line of internet-connected home appliances. It does so much invasive stuff that I've probably forgotten half of it here.
So it's neither a big nor a controversial claim in 2019 to point out that Google has unique breadth of visibility into both the physical world and anything that touches a connected device.
Can anyone provide substantiation for that claim?
Not if any of your friends use Facebook.
Which suggests to me that either you're exaggerating quite a bit, or you were using the term without quite knowing what it means. (Which means something more specific than simply "a lot".)
Anyway, a 100x tracking surface area is pretty accurate and not an exaggeration in the least; if anything it is too conservative of an estimate. Just Android, search and analytics on their own are easily 100x the tracking surface area of everything Facebook does, that's without considering:
gmail, home, docs, amp, drive, maps, hangouts, chrome, chrome os, messages, voice, ads, gcp, youtube, firebase, music, waze, play-store, places, wallet, domains, duo and so many more. I don't understand how this isn't completely obvious.
Because you're living in a bubble, and have grown accustomed to thinking that everyone else is using the same lenses to view the world as you are.
It was extremely hand-wavy, actually.
Whether you take that as an insult or not is up to you.
“includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade.”
US law enforcement had been regularly accessing Sensorvault user data in a dragnet-like fashion to obtain location details for hundreds or thousands of users
I have become more and more convinced that this is Facebook's real business model; that enabling instances of the Cambridge Analytica archetype is the purpose the company actually exists for.
They definitely collect much more data than Facebook. The only reason they haven't faced the same shitstorm is because they don't seem to share all that data with 3rd parties.
This is the whole point though, is it not? As far as we know, Google treats the data they collect more thoughtfully and responsibly than Facebook. And so they are (rightly or not) viewed as less of a threat to the public good.
Of course, they could just be better at hiding their abuse of our data... But that's a conspiracy theory, not a matter of public record like the Cambridge Analytica scandal.
How is that a problem? The issue at hand is the irresponsible handling of data (especially wrt 3rd parties), not the general handling of 1st-party data competently within an internal network.
So yes, it's a LOT better.
Assuming that the single source is trustworthy, sure. But we're talking about the likes of Facebook and Google here.
The two use cases for data aren't identical, and actually shipping the raw data out is worse. But, in my opinion, the two things are similar, and shipping the data out is not that much worse.
And that is something much more relevant to many users. I don't mind sharing a lot of my data as long as I know where my data actually ends up. If Google uses my data to improve their ad algorithm, I'm fine with it; if my Facebook data ends up in the hands of some election-manipulation company, I'm not fine with it, no matter how much data it is.
And how do you know what Google does with it? AFAIK Google has never officially stated in specific detail what data they collect, what they do with it, who can access it, etc.
>"...For example, we use service providers to help us with customer support"
As far as I'm aware there is no evidence that Google shares my personal information, without my explicit consent, with third parties like Cambridge Analytica, which collected tens of millions of individual user profiles.
My wife and I typically donate to a few non profits, such as the ACLU and Trout Unlimited. They occasionally mail us, but we did give them our address so that’s ok.
But one day she donated to the Environmental Defense Fund. Since then, the number of surveys and donation requests from random non-profits has exploded to 3-4 a week, including weird ones like evangelical surveys and pro-Israel mailings. My wife is pissed at the EDF, and will never give them another dollar.
The point? We were both fine with the non-profits having our address and using it, but knowing that one of them sold that data really pissed her off.
But I do, and Google (and Facebook) suck up my data anyway, whether I use their services or not.
That's the real fundamental issue.
To me, privacy already seems to be a lost cause. We've lost it, and there's little hope of taking it back. Privacy violation is also a relatively easy problem to understand; for bias and manipulation, however, we don't even know what to do.
And I can't vouch for all of Google, but regarding location data, Google has been pretty transparent about which data is collected and stored; papers like the NYT covered it extensively - see .
And Google also gives you clear ways to delete this data, as referenced in that NYT article.
And moreover, Google has been consistently on track to store less private data. Example: location data is going to be auto-deleted for users that want that, as of this month. Maps now gets an incognito mode.
>but that is a faith based position.
Hope the links I referenced will help dispel this notion. Google does take privacy seriously.
(Disclaimer: I work for Google. The opinions expressed here are mine and not my employer's; what I said is public knowledge.)
How did you read that article and come away with the conclusion that Google has been "pretty transparent"? The story was written after more than a year of other news outlets reporting on law enforcement using Google's location data to fish for suspects. Google had been providing this data for at least two years before the Times reported on it.
> And moreover, Google has been consistently on track to store less private data.
Such as credit card transaction data collected without most people's knowledge, or location data retained after you've explicitly told it not to?
Technology companies need to understand that both words "informed consent" are important. We currently have very little in the way of choices when it comes to data collection. It is simply not possible to opt-out anymore without tremendous effort and personal cost. I like this quote from Maciej Ceglowski:
"A characteristic of this new world of ambient surveillance is that we cannot opt out of it, any more than we might opt out of automobile culture by refusing to drive. However sincere our commitment to walking, the world around us would still be a world built for cars. We would still have to contend with roads, traffic jams, air pollution, and run the risk of being hit by a bus. Similarly, while it is possible in principle to throw one's laptop into the sea and renounce all technology, it is no longer possible to opt out of a surveillance society."
A big push towards openness and privacy has happened over the last year.
On an individual level, I don't think it's hard to opt out of Google's tracking.
I won't argue with Maciej's quote, though, because, just like with automobiles, people will still opt into the surveillance society willingly: the utility it brings them outweighs other considerations.
Ask people if they want to be tracked at all times, and they'll say "no".
Ask people if they want to be able to locate their phone when they lose it, and their answer might be different.
Ask them if they'd want to be able to call 911 and have someone come and help them even if they aren't sure where they are, and you'll get a different distribution of answers again.
In the latter case, lack of "surveillance" is seen as a "tragic shortfall", and adding it is a "feature".
So see, it's not the surveillance per se that people object to. It's implementation details. Welcome to Ceglowski's world.
Two of them are more than a year old, but the practices described in each are ongoing. The third, which describes Google's tracking of users after they've specifically opted not to be tracked, is from nine months ago.
> A big push towards openness and privacy has happened over the last year.
After literally a decade of constructing what is very likely the largest database of personal information in the world. Since the late 2000s, when Google purchased DoubleClick, it has worked to collect information without the informed consent of its users. What fraction of your users know that Google purchases their credit card transaction histories?
What is the "big push"? The only things I can think of were the opt-in auto-deletion of a subset of data announced over the last week or two. All the user has to do is pay attention to the tech press, then remember to activate the feature when it launches at an unspecified future date!
What is this "openness"? Working on a censored search engine for China without informing their own head of security?
> ...people will still opt into the surveillance society willingly: because the utility it brings them outweighs other considerations.
Sure, they absolutely do. There can be significant utility gains from large collections of information. But much of that utility could be gained from information collected in an anonymity-protecting manner. In order to have traffic information, for example, Google doesn't need to continuously track your location history.
> Ask people if they want to be tracked at all times, and they'll say "no". Ask people if they want to be able to locate their phone when they lose it, and their answer might be different.
And neither of these requires surveillance. The phone could be located either by returning its location on command, or by uploading encrypted location data to which only the user has the key. WhatsApp, for example, shows that end-to-end encryption can be seamlessly integrated.
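The "upload encrypted, user holds the key" idea can be sketched in a few lines. This is not any vendor's actual protocol - just a minimal stdlib illustration, with an HMAC-SHA256 counter-mode keystream standing in for a real authenticated cipher:

```python
import hashlib
import hmac
import json
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream via HMAC-SHA256 in counter mode (stand-in for a real cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_location(user_key: bytes, lat: float, lon: float) -> dict:
    """Phone-side: encrypt a location fix before uploading it to the server."""
    plaintext = json.dumps({"lat": lat, "lon": lon}).encode()
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(user_key, nonce, len(plaintext))))
    return {"nonce": nonce.hex(), "ciphertext": ct.hex()}  # all the server ever sees

def decrypt_location(user_key: bytes, blob: dict) -> dict:
    """User-side: only the key holder can recover the coordinates."""
    nonce = bytes.fromhex(blob["nonce"])
    ct = bytes.fromhex(blob["ciphertext"])
    pt = bytes(a ^ b for a, b in zip(ct, keystream(user_key, nonce, len(ct))))
    return json.loads(pt)

key = secrets.token_bytes(32)             # lives only on the user's devices
blob = encrypt_location(key, 52.37, 4.9)  # what the server actually stores
print(decrypt_location(key, blob))        # the user recovers {'lat': 52.37, 'lon': 4.9}
```

The server can still run find-my-phone (it stores and returns the blob on command) without ever being able to read the location itself.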
> Ask them if they'd want to be able to call 911 and have someone come and help them even if they aren't sure where they are, and you'll get a different distribution of answers again.
> In the latter case, lack of "surveillance" is seen as a "tragic shortfall", and adding it is a "feature".
Once again, this does not require ubiquitous surveillance, and it is misleading, at best, to imply that it does. Do you really not see the difference between location data provided to assist emergency response from a 911 caller and continuous location monitoring so that Google can serve more profitable ads?
>Actually, Google has.
In extremely vague terms, yes. I want to see an itemized list.
E.g., at Company X, this is what we collect:
1) Your name, age, location, DOB.
2) Your location is sent to Company X every 10 minutes.
3) Your IP is tracked per session.
4) All this data is linked to your profile.
5) Anything you type in the search bar is sent to a Company X server.
6) After anonymizing (if we do it), this is what your data looks like.
7) We never delete any of the above, for the following reasons.
>And moreover, Google has been consistently on track to store less private data.
The default should be zero, or as little collection of data as possible. From what you've said it seems people can opt out of some data collection, but it's vague as to what data is still being collected versus what isn't.
>Hope the links I referenced will help dispel this notion. Google does take privacy seriously.
Unfortunately they don't. I won't dispute your second claim.
> The default should be zero/as little as possible collection of data.
Really? What about telemetry for self-driving cars? Is it immoral to develop a system that leads to less blunt trauma and death on roads? We (HN users, I don't work for any of these companies) can define your term "as little as possible" about like you seem to define parent's term "seriously". The point being that such adjectives are difficult to pin down but also difficult to avoid. Define "difficult" however you see fit.
They own the cars so they can track them all they want.
Tracking me all over the place after I click the "Do Not Track Me" button isn't acceptable.
> Is it immoral to develop a system that leads to less blunt trauma and death on roads?
It quite could be. Just as we humans decided not to use the scientific research the Nazis generated on unwilling human subjects, there are definite limits to what is acceptable even if the overall benefits are huge.
This page includes other types of data (e.g. videos you upload to YouTube or mails in Gmail):
>It's all here, and you can delete it (including batch delete by period or source)
That scratches the surface, but an iceberg hides underneath. For one, how do we know it's all the data? For another, there is no indication as to who has seen it or how Google uses it. That is my point. Google has never detailed those things - I suppose for legal reasons. A user has a right to know exactly what they are trading with Google in exchange for free services. They can then make up their own mind about whether it's worth it. I'm just picking on Google here because it's a soft target, but this should apply to any service. We need new privacy regulations to formalize this.
Anyone deleting the data "Google" holds would have zero effect on the affiliate, while giving some people the feeling Google was doing the right thing.
So they claim, but I don't know why anyone should trust them about that.
Aside from that, though, what about the data collected from me? I have no Google account, but they're collecting data from me anyway. Same as Facebook.
That sounds like the behavior of an honest service provider. Worse behavior would be helping candidates who match their political biases while working against candidates who disagree with them. That would look like abusing their position as steward of a worldwide platform.
> With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns - no matter who the candidates are.
Isn't political campaigning part of the discussion? If so, why is it bad to give candidates equal access to tools to perform this campaigning? Would you begrudge a printer that would print signs for any candidate, no matter who they are? If not, how are electronic signs functionally different from printed ones?
> everything is happening violently.
I feel people are really abusing the word "violently" nowadays. Nothing that happens on Facebook is violence; it's mostly just talk.
>Big claim. Any proofs?
All this skepticism around Google's capacity to abuse their troves of data and invasive services is a clear indicator that this discussion has very little to do with real privacy. It is mostly a playground for various corporate, political and media shills.
Well. How about the following facts?
-- Android's market share is about 75% of all smartphone users. Do non-smartphone users still exist?
-- It's probably safe to say that it's much, much easier to consciously avoid Facebook's services than to consciously avoid Google's.
-- Android's hard-coded fallback DNS server is a Google DNS server. The vast majority of people don't use a VPN (for a long time the only way in Android to change the DNS server) to get around this. I haven't looked it up, but it's probably safe to assume that Chromebooks also use Google's DNS by default. My limited networking knowledge tells me this means Google knows who's using which Android/Google device at what time (matching IP to Google login to Android device), who's visiting which website at what time, and who's using which app at what time (requests to the app's server and to Google's servers for location, payment, auth and other info).
-- A lot of people use Gmail. Google can literally read all Gmail emails, even those sent to Gmail from outside Google's servers.
-- The vast majority of Android users have enabled "Google location services" in Android. This is a one-time "click OK to continue" dialog that permanently enables these location services until disabled manually in settings (nobody does this). Weather apps, Tinder, navigation apps, etc. almost all require it. This means those people are continually sending location data points at roughly 500 ms resolution to Google servers, even when those apps are turned "off" (meaning running in the background; "off" doesn't exist in Android). Google can literally know how long you poop, who your secret girlfriend is, and what specialist you visited at the hospital. The NYT had a huge piece about this: https://www.nytimes.com/interactive/2018/12/10/business/loca...
-- People use Chrome. With automated Google login. Meaning Google knows everything they do online.
-- I'm not even going to go into other popular Google apps, we all know them: Youtube, Google Assistant, etc. With all the accounts nicely automatically linked to one another.
-- All of the above information, and probably much more, can be used for profiling users. I think it's safe to say that Google knows absolutely everything about its users at this point.
Then there's the following:
-- Law enforcement (internationally?) can request "Sensorvault" data on any Google user in their country. AKA: they can request to look into all the details of one's life.
-- The NSA can secretly request access to any of Google's data. Google is not allowed to disclose this access. By law. That's what they can legally do. Snowden has told us what they illegally do: anything they want. The NSA literally knows everything there is to know about everybody. In the world. And Google, by law, has to help them with that. In secret. This obviously has political consequences for the simple fact that "information = power". Those political consequences are not in favor of democracy.
Where I write "Google knows", I mean that Google's servers receive that information (a fact). It is debatable whether Google stores that information or not. As for the location data mentioned above, it is a fact that third parties (app makers) do store this location information and sell it (illegally; see the NYT link above). It is also debatable whether Google deletes your information when you ask them to.
My personal guess is that absolutely all information is stored, but that this storage is not disclosed to the public. I also personally believe that when you request your Google data to be deleted, only "the public-facing layer" of your info gets deleted. I don't believe for a second they actually delete your "anonymized" data - in quotes, because such data can very easily be de-anonymized.
I mean, just consider the fact that there is deliberately no "DNS server address" setting in Android. Ask yourself why. Why would Google make it so much easier to just use the Google DNS server? Why does it offer a free DNS server to begin with? Why does all your location data have to go through Google servers before being consumed by the apps that run locally on your phone? That says it all to me.
Facebook is much more likely to be seen as a guilty pleasure, or a marvelous time-waster, or something else that's a bit farther down the utility curve.
I don't see its utility. It is a proper waste of time.
It is indeed the most popular, but it isn't unique in what it offers. Many others do what WhatsApp does just as well, but none have the number of users.
This is more than just negative press, this is a question of how data collection has been misused, and what lies executives have told about current and future plans surrounding privacy and data abuse.
But, personally, I'm staying away from FB, Google, Amazon, Snapchat, et al for the reasons you've mentioned; negative press or no, I cannot ethically work for companies that are haphazardly building the foundations of a potential technocratic dystopia in their chase for profitability.
google's, or more broadly alphabet's, only competitive advantage is a thin lead in what might be called data intelligence (or surveillance, for the more cynical). they collect data across all internet ingresses/egresses, not just on those who opt in, but even on those who actively avoid google (through android, gmail, google apps, analytics, dns, internet access, etc.). and that data is super valuable--alphabet had $30B in profits on $137B in revenue (an extraordinary margin).
to be clear, i'm not attempting to judge or disparage individual engineers at google. i'm sure most are mighty fine folks.
but for the foreseeable future, google really has no choice in the matter, not until it finds a different massive market from which to derive revenues. it's the nature of the business. and in the meantime, it's also under assault from intelligence, paramilitary, corporate, and governmental organizations from across the globe.
at least for americans, privacy and liberty are fundamental and inalienable rights. even though the constitution explicitly forbids only governmental interference with those rights, they apply more broadly to any entity, particularly global corporations, attempting to exert power over individuals. and while inalienable, citizens still have a duty to be vigilant against such infringements.
That seems likely to be a grand understatement. FB has the opportunity to collect a great deal of data about their users beyond what they explicitly post -- for example, data about when and how they use Facebook mobile apps, how they interact with the Facebook web site, and what external web sites they visit which contain Facebook Like widgets.
Waaay more valuable than FB scraping my phonebook and photos
As well as all the data they can get, whether or not you even have a Facebook account. Real-world credit/debit card usage data, for instance.
This whole debate about who is worse, Google or Facebook, is a bit ridiculous. They're both unacceptably awful, and practically speaking I don't think it matters which is more awful than the other.
I disagree. I think that on the whole, they're both about the same. But in terms of integrity, honesty, and ethics, Google has a (small) lead on Facebook.
I've come to terms with a simple fact of life that after graduating, it gets harder to make friends as you get older and start to settle down away from your college towns. Most of the acquaintances I've added on Facebook might as well not exist as we don't talk offline and my core circle of friends communicate over imessage/sms or various chat apps and we try to make time to see each other, further cementing our friendships offline.
Another thing that bothers me about Facebook - I first joined around the time a .edu email address was required (I think?) - is that every time I visit the site, the new interface and feature bloat makes it feel less and less like what made it dead simple to connect with people in earlier times. The current experience for me consists of a noisy, ad-infested newsfeed, ultra-optimized to inject itself straight into your brain's reward center with statistically significant A/B-tested precision and autoplaying clickbait media nonsense, all while functioning as an echo chamber for long-lost acquaintances' political outrage spam.
I wonder if people from my age cohort feel a similar cognitive dissonance, and that's why Facebook isn't even on their minds career-wise - it's like an ancient digital museum that houses dusty pictures from their younger years and has long been replaced by Instagram.
Anyone out there relate?
This is not really a simple fact of life, in my opinion. It only gets harder because people make less of an effort. If you put as much time and energy into being social later in life as you do in college, then it isn't any harder to make new friends.
The main difference is that in school, you're automatically surrounded by a lot of varied people. Out of school, that's not automatic -- you have to intentionally put yourself in such situations. Often this is done by joining and participating in clubs and organizations that cover things you're interested in (dancing, crafting, whatever).
The situation is much different when you have to find a babysitter for your kids to free up what little time you have each day, which is then split between you and your life partner, in order to socialize regularly.
I've just internalized this phenomenon as a fact of life after entering mid adulthood and settling down.
> The situation is much different when you have to find a babysitter for your kids to free up what little time you might have each day
Indeed! That was what taught me the real reason to arrange "playdates". It's not really for the kids, it's so that the adults can socialize with less hassle around babysitters and such.
But having children certainly makes lots of things more difficult. Mine are adults now, and I can tell you from experience that once the kids are off to college and beyond, then your social life can come back in its entirety.
That's exactly OP's point. Not automatic implies not as easy.
There is no shortage of people joining FB because there's no shortage of people wanting to join a big company. Maybe if they're all comparing offers between big companies then they'll join some other big co but if the difference is startup vs Facebook... FB wins.
See here : https://www.levels.fyi/salary/Facebook/SE/E3/
It is closer to 155K + a signing bonus.
Yes, in my experience. Unless, as you mentioned, you count stock options or similar (which I don't).
Comparing FAANG compensation at $300k TC vs a $180k salary plus Monopoly money... FAANG wins often enough, since you never get enough stock in startups for it to really be worth it (short of being a founder).
I've been in the industry too long to put any real value on them, regardless of whether they're from a Fortune 50 company or a startup. Sure, sometimes they pay, but it's always a gamble. I sorta view them more like lottery tickets than actual compensation.
But that's all beside the point. I understand why these sorts of things may be appealing to people. They just aren't to me, so they don't factor in as "compensation" when I'm evaluating a job opportunity.
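The comparison above can be made concrete with a back-of-the-envelope expected-value calculation. All of the numbers below are assumptions for illustration only (not real offer data); the idea is simply that liquid RSUs can be counted at close to face value, while illiquid startup options deserve a heavy discount for the chance they never pay out.

```python
# Hypothetical comparison of two offers; every figure here is an
# assumption for illustration, not real compensation data.

def expected_comp(salary: float, equity_face_value: float, payout_probability: float) -> float:
    """Expected annual compensation, discounting equity by the
    probability that it is ever worth its face value."""
    return salary + equity_face_value * payout_probability

# Big-company offer: liquid RSUs, near-certain to be worth face value.
faang = expected_comp(salary=180_000, equity_face_value=120_000, payout_probability=0.95)

# Startup offer: same face-value grant, but illiquid options with a
# (generously assumed) 10% chance of a meaningful exit.
startup = expected_comp(salary=180_000, equity_face_value=120_000, payout_probability=0.10)

print(f"FAANG expected comp:   ${faang:,.0f}")    # $294,000
print(f"Startup expected comp: ${startup:,.0f}")  # $192,000
```

Under these assumed numbers the gap is large even before considering that startup options usually require exercise costs and carry tax complications, which is roughly the "Monopoly money" point being made above.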
Nope. The people I know who turned down FB offers did so purely because they see it as a less stable company and have doubts about whether the stock will keep falling. No one wants to wake up a month later to find out that their signing bonus just got reduced by 10% due to a bad news cycle. I would estimate that less than 10% of people turn down an employer due to privacy-related ethics. On a side note, FB has also jacked up stock bonuses for existing employees. Their attrition rate is virtually unaffected despite all the bad news.
With SO much negative press, I feel that Facebook has lost its mission among the wider public. If it is a net bad for society, or even just perceived as one, it is hard to hire someone who shares your vision; you only get mercenaries.
Good people are weird, though. They work for money, like everyone else, but not just money.
I hear this storyline fairly often (though exclusively from corporate recruiters) and I have a super hard time understanding why this would matter. Can someone who actually listens to this kind of (IMO) propaganda weigh in and help me understand why it matters to them?
It also matters in terms of how good this will look on my CV, and the story I can tell later. I joined a now well-established and fast-growing tech company as employee no. 267; we're now at ~1200. That looks great on my CV, and in an interview, if I talk about scaling issues (both technical and cultural), they'll likely believe me.
I've gotten a bunch of pings from them over the last few months, and I just chuckle, say "hahahano", delete it and move on. I don't know if it's a coincidence that the pings happened after the scandal or if they have gotten into 'look under every rock' mode.
Perhaps they were ashamed to admit it (?)
If you are bright-eyed optimistic about Facebook I'd be interested to hear your counterpoint to all of the scandal. I don't think there is any company in the FAANG that is an altruistic enterprise but it isn't surprising that FB would have a decline in hiring.
I feel like Google started that way, and then lost its way sometime between 2009-2012.
Projects like Google Scholar, Google Books, Google Summer of Code, Google Reader, Google Open Source, Google.org, and pulling out of China didn't really have much of a business justification, but were simply something good that they could do. Unfortunately they're a public company, and when you start struggling to meet analysts' (perpetually inflating) estimates, being good - or at least not evil - is usually the first thing on the chopping block.
The fact that they kept the wheels on as long as they did, I gotta give them some respect for that. But they were always destined to end up being amoral at best and a cesspool at worst.
If you are starting a company and think you want to be proud of it for the rest of your life, sell a real product, not your users.
Re: “you are the product” meme. I guess it’s a mechanism for raising awareness of privacy violation, but I really don’t like it. If you were literally the product, you would be a slave. You’re not. What they sell is your attention.
A big reason for not liking “you are the product” memes is it misses the key aspect of manipulation, which phrases like “the attention economy” capture. You are being manipulated into giving up more of your time and attention.
These days the exciting things are happening in other areas, so for the Internet giants, it's time to optimize for profit.
Is that not a pretty good definition of not being evil? IMO it is still the time to explore, not exploit, even if that's not what they're doing.
Tahrir Square was the high water mark of the old school techies. The failure of tech to effect real and lasting change really hasn't been understood by the techies, even still. That optimism about the future and tech's role in it, is gone.
The only lasting legacy of social media’s role in the Arab spring seems to have been inflating the self-worth of high level execs, and blinding Obama-era officials to the way these sites could be turned into tools of disinformation and repression.
When the media talked about the "Twitter revolution" I still remember thinking that there were people risking their lives on the streets and how ridiculous it was that some social media guys drinking lattes in their offices got the credit.
Before, there was such optimism about tech. Nothing could stop it. Everything would just get better.
Look, the oppressed are rising up together! Look, medicine is getting better! Look, we're talking at each other, not shooting and hurting!
The Arab Spring was the high point, the proof of the pudding.
After the failures there, sure, yes, tech has helped, has advanced the world. But the optimism that was in Tahrir Square never came back. FB was a way to talk with each other and a third space; now it's a Skinner box. Wikipedia was the nascent Encyclopedia Galactica; now it's just mostly good and sometimes suspicious. Google wasn't evil; now it works with China to make Orwell sigh.
Things are chugging along, yes. But people used to actually think they could change the world for the better; now tech just has mortgages.
I think the last few years are when things tipped toward me distrusting Google more than the boogeyman of older times: Microsoft.
I'm not saying they're saints, but they've given me something free that's improved my life. Maybe it's ultimately greedy in the sense that later if I need a cloud platform I'll definitely use GCP. But I think that kind of mutualism is actually better in practice than altruism.
No, they haven't. They're just making you pay with a different sort of currency.
I'm not saying that's good or bad, and I'm not saying that you aren't getting value for what you're paying. I'm just saying that the notion that these things are "free" is incorrect.
Sure there isn't any company in the FAANG that is an altruistic enterprise, but to be only pure evil one is Facebook.
What really impresses me is that there's still a lot of talented people working there.
Blaming the platform for carrying fake news seems disingenuous. Fake news has been spreading over every available channel ever since humans learned to talk and figured out that they could lie to each other. Blame people for believing most of anything they're told.
Fake news in the past always had an identifiable source, because there was still an institution, a company, or someone with their name on the door between reader and publisher. As it stands, no such barrier exists any more. Things can be inserted by malicious actors into the debate, and they spread automatically simply because they have the tendency to 'go viral', something entirely absent in the past. That has added a completely new set of problems.
>Blame people for believing most of anything they're told
Precisely because it is very much in everyone's nature to suffer from these mechanisms, it makes no sense to blame "the people". What would that imply, a great re-education of everyone? Obviously the only thing we can change is the companies, institutions, and rules that determine how we consume the news, not how human brains process it.
Even in the past, good old rumor mills (I heard it from a friend of a friend of a cousin's barber) were reigning supreme in spreading bullshit around, by simple word of mouth. The Internet here is just a compounding factor to something that is very, very old and already very, very effective.
Edit: expanded first paragraph
I totally agree, but it doesn't follow that this means that the fake news situation is acceptable, or that Facebook isn't responsible as a platform which the OP argued.
With the increased density of urban living came more opportunity, but also more crime and disease. We don't shrug and accept that bar owners have no responsibility as platforms; we give them a set of safety and health regulations and responsibilities, and we equip the police with tools to combat crime.
So in that spirit, just as the internet isn't the internet of the hacker and small community age, companies should have to deal with the problems they produce. Just like everyone else always had to.
The disease is people being morons. If you want people to not be affected by fake news, making platforms censor people won't have any positive effect. You have to educate people.
There's no need to change anything, people or companies. Left to their own devices people figure out propaganda eventually. It may not be in the direction to your liking, but then, as a supposedly rational person, you must accept that maybe it's you who is victim of propaganda and the other people who are not.
The best example of this in recent times is the large number of supposedly smart people who fell in love with "The US President is a Russian spy" as an idea, which was based on nothing - it was rumours, it was fake news, it was propaganda distributed by the press, and now 50% or more of the US population agree with their president that it was also a witchhunt. Seems like people were drowned in fake news and still, a large chunk of them understood it was fake. Of those who still believe it, it might be more accurate to say they wish it was true - but that's a common theme in all rumours and propaganda throughout history.
How would you solve this problem?
I don't know which way is right.
I don't have a problem with the business model (targeted ads), but I have a massive problem with the lack of honesty, and this is the distinction between Facebook and Google for me. Google tells you what they collect and gives you the controls to delete it. That is enough for me.
Facebook struggles to remember that I want my timeline kept private.
I also believe that Cambridge Analytica was no accident; FB knew what they were doing, and they decided to throw them under the bus when the media turned on them.
Trust is hard to build up and can be shattered in a day.
Come on. According to FB, the IRA made 80,000 posts over a two-year period. In the same period there were 33 trillion FB posts. What moron still believes this garbage?
FB was hung out to dry by Congressional democrats too spineless to own up to their own pathetic failure to defeat Trump.
It is the desire to make this assumption (that Russia subverted the campaign) that drives the conclusion more than anything else. None of which is to say FB is innocent of blame. But their crime is hooking up an ad network to the social network, not colluding with Russians.
How could twitter, for example, really measure the impact of something like that video of the Covington High School kids, which was amplified on twitter (shared by a fake account, IIRC), picked up by the media, and then talked about incessantly for weeks, all over the place?
2016 - 1 total from math and engineering
2018 - 4 total from math and engineering
If you're curious to do some more research, here's the link https://www.cmu.edu/career/about-us/salaries_and_destination...
Hasn’t all of this stuff been obvious forever to programmers?
Yes, but it wasn't at the "oh crap elections were manipulated, democracies toppled, dissidents tracked down, and genocides enabled" level.
The fact that Apple, the world's richest company, now has a mainstream marketing campaign around privacy tells you it is now officially mainstream mainstream, not just programmer mainstream.
Apple uses privacy as an attempt to differentiate itself from Google, despite there being very few actual differences between them in the phone market. It doesn't seem to have helped: Android is globally dominant.
> It doesn't seem to have helped
yet. Also Apple's goal with the iPhone isn't global dominance.
I don't think so, considering all the devs who called others "paranoid" before for raising these issues.
Out of curiosity what have they moved on to?
I have a friend that jokingly said (in a private group) that men are vile pigs. We knew she was joking - it was good natured. Yet, Facebook issued her a warning and removed her post and threatened her with a ban. First they came for Alex Jones and I said nothing because I don't like Alex Jones (and think he's insane), but now that the precedent is set that Facebook is the speech police, it will expand to us all (especially with their machine learning advancements that are here and yet to come).
The EFF has a really important article about this that I implore everyone to read.
FB has its problems, but I generally find the negative press overstated and wonder if Zuck's approach to interact with the press and congress actually backfires (compare to the other companies which largely ignore them). I appreciate how often he talks to the press to explain what they're trying to do though.
I also see the Cambridge Analytica scandal as what it is - permissive APIs that were abused and then locked down. Cambridge Analytica is to blame in this for abusing TOS and behaving badly, FB is arguably negligent - but I think the reaction is extreme.
Plus from people I know inside FB there really is a huge funded effort to stop abuse and manipulation via 'integrity' teams. It'll be interesting to see how they modify things given Zuck's recent pivot towards focusing on privacy as a core feature.
I don't see any nuance, I see invented complexity masquerading as something sophisticated.
This article is written as if they're trying to suss out the perfect boundaries along which to apply censorship. That frame of thinking is a con. They will never have a perfect censorship implementation because there is no win state.
Let me back up: as you read this, Facebook has a censorship policy. What would it take for Facebook to know they're done arguing over what does or doesn't get deleted? How do they know when the censorship policy isn't good enough? Advertiser pressure, press pressure, and political pressure. i.e., fashion.
It should have been obvious to everyone present at (or who read about) the meeting in this article, where Facebook attempted to invent a principled stance that allows casual misandry while banning similarly-tempered casual misogyny. But I'm perfectly willing to believe Facebook can't see it. I can explain.
Facebook's stance towards nudity has a very clever property (that is almost certainly unintentional). It allows users to lie to themselves. Facebook could automatically hide nudity from everyone who isn't an opted-in adult, and still put it behind a Twitter-style click gate so there is no accidental NSFW at work. But they don't. They'd have to add an "I'm not a prude" checkbox. And THERE'S the rub. The lack of such an option lets people who are uncomfortable with nudity tell themselves that they're not the kind of person who is uncomfortable with nudity. They want to think they're sex-positive enough to allow nudity, and just reasonable people who don't mind if it happens to be banned. Even more importantly, it lets them avoid thinking about the question: "am I so uncomfortable with nudity that the idea of other people - the WRONG PEOPLE - seeing it makes me uncomfortable?"
A lot of this is detailed in the article, it's helpful to read it.
I'm not saying they are.
>it's reasonable for them to have some editorial control over what's posted to their platform
I'm not saying it's not.
I'm saying that their editorial policy is (in part) driven by fashion instead of principle. And complexity is used to obscure that rather than reveal it. My attempt to use Occam's razor to explain the obfuscation leads me to conclude there must be some utility that caused the system to evolve in such a way that it's possible to avoid seeing/acknowledging that.
Since whenever Zuckerberg speaks he always seems to be equivocating, if not outright lying, and his historical responses when Facebook has been called out for being abusive have always been lots of promises with no real changes, I suspect so.
> It'll be interesting to see how they modify things given Zuck's recent pivot towards focusing on privacy as a core feature.
That entire effort is Facebook's attempt to change the topic to something that is less threatening to them. The real issue is the privacy invasions from Facebook itself. Everything about their "pivot towards privacy" is premised on the privacy threat being actors external from Facebook.
They got by on being the new darling-child startup and built up a gargantuan pile of moral debt, which they are now fairly paying for.
I would have thought a private group would be immune, but apparently not.
The common reason I heard from most of my friends who turned down FB or quit FB was that the working culture is too demanding and high-pressure. Google, on the other hand, is more laid back and family friendly. So people who have started building a family will prefer Google over FB. The nice thing is FB tends to offer a higher level than Google, so in some cases, if you get matched, it works out pretty well.
I have a friend who worked at FB. After he came back from paternity leave, his manager told him he had been slacking (his reviews were always "meets all"/"exceeding" before) and it was time to put in more work; he quit after a month.
That being said, I share your perception that Google and Apple are cooler than FB. :)
No one rallies against the US Census as unethical
If it's being done without the informed consent of the user whose data is being collected, then yes.