This is a romantic vision that the companies themselves like to perpetuate, but it hasn't been true for a decade or more. It's irrelevant what Facebook employees think, just as it's irrelevant what the employees of McDonald's think about the health of their customers. They are not unique, precious snowflakes of rare skill; they are replaceable cogs in a machine that exists for a single purpose: profit. Massive IT education programs now underway everywhere in the world will ensure that corporations have vast masses of foot soldiers, so they can concentrate on what matters: creating competitive advantage and market dominance using the strong network effects technology affords.
Silicon Valley has bred an ultra-aggressive type of capitalism that will crush and automate the old competitors away but will no longer redistribute the wealth to the workers, as capitalism has done for the last few hundred years, because it no longer needs them. In this economic war, developers are rich mercenaries, not noble freedom fighters. The likes of Facebook and Uber are simply the expression of that social reality, a glimpse into the world of tomorrow.
If 'massive IT education programs' had made engineers disposable, then salaries would be a lot lower.
(If you need 'data' on this, Glassdoor says FB SWE salaries are >120k, total comp will be a lot higher. Compare that to a job with an actual disposable workforce.)
>In this economic war, developers are rich mercenaries, not noble freedom fighters.
If you said developers were poor mercenaries, that'd be one thing. But if you are rich, then you can choose to work somewhere compatible with your morals.
And if you don't make that choice, it's on you.
It appears that we have vastly different opinions on how capitalism works. Capitalism needs to be coerced by laws into giving back to the workers (taxes on companies, etc.), and more often than not, companies try to circumvent these laws.
Nozick made a classic argument about how naive concepts of social justice are incompatible with a system in which everyday economic transactions, freely entered into, are honored. He used the wealth of Wilt Chamberlain as an example (although his argument was actually concerned with wage ceilings, the same thinking applies to wage floors).
You can insert the words "for example" before "basketball player," or fill in another profession if you need this to be an argument. I wasn't really trying to make one, because the burden of argument isn't on me.
When someone claims that capitalism requires coercion in order to pay high salaries, the burden is on them to explain why so many high salaries exist, sans coercion, under capitalism. Or why wages have generally been higher in capitalist countries.
Another problem with wage floors (coercing higher wages) is that, to the extent they are effective, they inevitably create surpluses of labor. For example, requiring every fast food worker to be paid $30 an hour would leave many would-be fast food workers unemployed (a labor surplus, by the economic definition) and shrink the supply of fast food. Requiring every programmer to be paid $200,000 would likewise shrink the amount of programming that gets done.
That's not to say the income distribution aspect is unimportant; it's another side of the same issue of class relations. But I don't think a claim can be made that capitalism itself is a strong force for income redistribution. And in the short run, distribution is the only thing that matters to workers.
Crucially, capitalist redistribution, to the degree it works without state intervention, is strongly related to the value of the human capital each person is born with or can acquire in life; it's no coincidence that massive public investments in education were strongly correlated with increasing equality. Once human capital is devalued by thinking machines that can do ever more complex things ever cheaper and faster, we will pass the point where workers no longer have anything to bring to the negotiating table: they need the products of capitalism but can no longer capture a meaningful fraction of income relative to capital, nor can they amass significant capital of their own, due to diseconomies of scale.
The feudal model is thus appropriate: all-powerful lords waging war over turf, a caste of mercenaries fighting the war (knowledge workers and specialized professionals), and a vast mass of peons.
I was trying to make an argument that in capitalist countries without worker protection laws like minimum wage, holidays, worker safety regulations, retirement, etc. (think China, or industrialization-era Europe), workers tend to be exploited by their employers rather than receiving part of the wealth they help generate.
As long as we define capitalism as the freedom to buy and sell legal goods and services at whatever prices/terms one wishes, my own conclusion has been that when companies are in a position to exploit workers, it's typically because of a lack of free enterprise, not the other way around. I can see why people who think the cause is free enterprise would want to regulate further.
But inevitably when you find powerful exploitative corporations you can dig a little and find that it's usually because they are not operating on a level playing field... the huge exploitative entity is only possible because someone's economic freedom is being restricted somewhere. Typically by writing regulation in their favor -- both in industry-specific ways and with corporate law in general.
The thing to do is to strip away their unfair advantages, not to allow the beasts to exist and "regulate". Because small businesses have a hard time coping with the cost of even well-intended regulation you can end up having the opposite effect. And the regulators end up becoming complicit.
Zuckerberg Taunted Employees With Samurai Sword, Ex-Facebooker Says
> Massive IT education programs now underway everywhere in the world will ensure that the corporations will have vast masses of foot soldiers so they can concentrate on what matters
Not yet. Source: most FB engineers get paid a lot.
This is a mistake I've seen made by government officials before: The idea that a tech worker is a tech worker is a tech worker.
A Java developer with expertise in Shibboleth is not going to slot in as a replaceable cog on an iOS game project.
In short: Facebook does care about user privacy, a lot. Some of the worst pains we had in platform development at Messenger were related to convincing the privacy team to let us open more info to developers. And we usually failed. There was and is a strong sense of internal paranoia around leaking any user data, even if anonymized.
Whenever there’s a chance of a data leak of any sort, everything halts and the whole team is in on it. I personally spent weeks of my life in such meetings, redesigning whole parts of the product to avoid even simpler things like anonymous user tracking by developers without explicit consent, etc.
So when I hear people saying that Fb intentionally does something bad or doesn’t care, those people really don’t have a clue.
The current situation is a byproduct of two things: 1) legacy decisions (reverted in 2015) from when the company was younger and more open and didn’t have those protocols, and 2) just the very sensitive nature of this data. This is a social network; what would you expect it to give developers through its API, if not your info and parts of your graph? Even those days are over now, due to (imo) unjustified paranoia.
Ads targeting is another example of that. The fact that you see targeted ads _does_not_ mean the advertiser knows _anything_ about you individually. It’s fully anonymized for them!
And no, there’s no evil intent to make the UI more confusing to get more of your data. It’s the opposite at this point. Jeez, folks, take off your tin hats. There are many things to question Fb about (most notably product innovation), but here they are doing their best.
Things have changed? What about that Real Name Policy?
Anonymized data? The world of data brokers might disagree.
The shady world of farmers and affiliate marketers rely on the ease of Facebook data mining.
Tin hats? Facebook has relied too long on users not understanding what happens with their data behind the scenes.
Personally identifiable or not (a trivial matter), the fact that Facebook and its devs fail to see the harm this platform is causing through its gluttony for user data is mind-boggling.
Now imagine we are drawing up the ethics code for that profession - is what facebook did something that could reasonably be banned? Is there direct harm involved?
In my (very personal) view, Facebook is just an example of an externality: this loss of control over data is a form of pollution, and the costs are rarely direct harm to individuals, even though society as a whole seems to be harmed.
So, rather long-windedly: I don't think this is something to beat up individual Facebook employees over. This is something for society-level regulation. Look to the GDPR and its successors.
Medical ethics jumping off point: https://en.m.wikipedia.org/wiki/Medical_ethics
None of it gives Facebook engineers any comfort.
They are responsible for the fruits of their efforts. Recent revelations are of degree, not kind. The consequences of engineering work were reasonably foreseeable and the consequences were not at arm's length. Facebook's business model has been clear and well-understood for years and years now. This is not technology being misapplied by someone several steps removed from you. This is technology misapplied by Facebook and Facebook's immediate customers and Facebook engineers enabled it.
Technology work has dimensions beyond "it's interesting" and "they pay me a lot". I hold us to be professionals. We have duties which go beyond our own self-regard.
I am unsympathetic to anyone at Facebook feeling moral disquiet. I think you know what you ought to do.
Look at medical ethics ideas of "do no harm" - or the more modern interpretation "do more good than harm"
It is perfectly arguable that Facebook has been doing more good than harm. And that is not something one should ban under a code of ethics.
I personally do not think that Facebook's actions have been correct, worthy, or defensible. They have hoovered up everything and left it lying around for anyone to collect; it breaks every data protection rule set since the 1980s (due care, etc.). They can and will pay at some point down the line.
But data is like ... oil. It has massive externality costs, but wow, has society benefited. From the data footprints we leave we can (if it is well managed(!)) expect improved medical research, improved mental well-being, reduced crime, improved trade, and greater economic equality. Yes, there will be costs, and we need to manage those as only externalities can be managed. But the mass collection and processing of our PII is not just a negative cost.
Edit: just to be clear, this is not an attempt to justify ends with means, or to defend Facebook's actions, but to note that just because J.P. Getty was unethical, it did not stop ambulances from getting petrol engines, generators from lighting hospitals, or plastics from encasing artificial hips.
I don't work at Facebook and I actually don't know what Facebook employees ought to do, but I'm curious to know what you think about that.
Resign? Blow the whistle (to a society that really, really didn't seem to care until last week)? Try to find ways to pressure the non-engineers in charge of product decisions to do something? Unionize? Something else entirely?
Also, are you talking about all Facebook engineers or only some? Do you put those who work on React, Flow, Haxl, Hack, ReasonML, Mercurial, Oculus, etc. in the same basket?
Edit: Thanks for the link. Good read.
> 1. PUBLIC - Software engineers shall act consistently with the public interest.
> 1.01. Accept full responsibility for their own work.
The media tries to blame Brexit and Trump on FB instead of realising that a large portion of the population actually wants Trump, Brexit, no immigration, Muslim bans, etc. Hey, but it is easier to think that the people were tricked into voting like that.
Note: Surely this doesn't account for everyone that voted for Trump, but it is possible that it made enough of an impact to change the outcome.
Have you seen Nix's presentation at the 2016 Concordia Summit?
What targeted ads lose to fake news by having to stay clean, they regain in psychological accuracy.
The book claims strongly that Facebook is a communications company that wants to help (not force) people to be more social, and that advertising is purely a side effect intended to enable that mission. It was quite clichéd, but overall I got the impression that it was honest. I really do think that Facebook the company, and by extension many of its employees, see themselves like that.
I personally find that to be a little naive, but then I am more sensitive to privacy issues than the average person.
Basically her response was that the whole incident really isn’t even Facebook’s fault but rather the third party developers. Even though Facebook could have done a better job of protecting data, it’s a waste of time to do so as these companies will always find a way to violate the TOS. It’s a bit like fighting software piracy or ad-blocking, hackers are always getting a step ahead. No company is perfect, despite everything Facebook still does a lot of good in the world, and as a company it’s going to keep doing more of it. There’s a lot of people who hate Facebook for personal or business reasons, but in the end there’s no fundamental reason why Facebook is a bad company that shouldn’t exist, it’s only going to get better anyway.
Advertisers are able to target so effectively: by salary, by socio-economic status, by hundreds or even thousands of metrics. I think that's still bad, and I don't think I could work on a product like that.
It's not too hard to understand the creep factor for most people.
I'd prefer to get the mainstream to back things like Mastodon (openSocial), Matrix, and Scuttlebutt rather than call for regulation, but we have to educate folks about the benefits of such federated/distributed networks.
Yes, 3rd party devs are responsible and yes Facebook is too. Facebook's business model and algos feed the ad tech machine. Saying that FB is just a scapegoat is some kind of twisted tomfoolery.
You mean "their employer"? Or maybe most have some small % of stock, but to me my tiny-% stock holdings do not make those companies "my companies".
This whole big data thing, along with all these related jobs (big data engineer, machine learning engineer, data scientist, etc.): what do people think this was all about? Why do we have all this data now that we didn't before? Because everyone is tracking stuff. So where should we all work if we don't want to be part of this? If you're not doing this directly, you're probably paid by someone who is.
> Besides, it's the customer's fault if they invite us inside their home and willingly hand over the keys to the cupboards.
"Aw, but everyone else is doing it!" has never been the basis of a respectable system of morality. It's playground ethics.
It might not be a respectable system of morality, but it's a popular one.
My personal opinion is that the problem is not so much with Facebook but with the legislation. Corporations like Facebook are there to make profit within the rules defined by society. I hope there will be stronger laws to protect our privacy and data, and generally more awareness on the issue.
They seem to be having their cake and eating it. You hear the term "disruption" all the time from tech companies, and you'll hear tech leaders wax lyrical about their vision for society and how their work/product/service is shaping that.
Recent events have outed these people for what they are. Self serving profiteers who only care about how society can help them, and have little regard for how they can best serve society.
While shady corporate megalomaniacs are a problem, they are by no means unique. What leaves such a bad taste in my mouth however, is the way tech billionaires are so eager to project the idea that they're benevolent agents of progressive change, when really they're not and they're not even trying. They've been dining out for free on goodwill for too long. Now it's time to start paying the bill.
Never before has a business been able to choose the right advertisement to show an individual at the exact moment they're looking to purchase an item, maximizing the likelihood of purchase.
It's incredible and we've never dealt with something like this. We'll get better at understanding and education of the issues but not without major setbacks.
So yes, corporations can legally make a profit within rules defined by society. But whether a corporation is in a sustainable position depends entirely on how it balances long-term viability against short- and long-term profits.
This holds true only as long as those corporations do not influence what the law looks like (i.e., the laws are the result of a democratic process). The question is whether that is the case for Facebook.
It's absurd to outlaw everything before it becomes a problem.
A lot of the data that Facebook gathers in their dragnet, another purpose-built company could ethically use to solve a problem for you.
Making fun of "the other side" through strawmen and irony will earn you social points among people with the same views, of course. Meanwhile, the people you make fun of will think twice before venting their opinions around you, or on this platform in general, the next time.
What it will not accomplish, though, is changing anybody's mind. It will just make the forum more of an echo chamber, which creates an illusion of political consensus.
Do you want to live in an illusion? Do you think that bullying people with opposite political opinions into shutting up helps your political agenda in any way? Or maybe you're setting yourself up for another big surprise come next election?
Making fun of their self-victimization is not bullying either. Do you really think Palantir employees are a group that needs to be protected from semianonymous Internet comments?
Can we talk about it without value judgements whatsoever, please? I don't think that we'll come to terms about who's a victim, who needs protection, and so on - but most of all, I think that these questions are inconsequential and boring.
What I'm interested in are the objective things that we can agree on regardless of our political stances. So I want to ask my question again: do you think that your comments help your political cause in any way?
Your original comment calls HN an echo chamber and suggests that people who work at Apple, Uber and Palantir are afraid to express their political views. That's entirely about feelings. What specifically was it that you wanted to discuss there?
Maybe it's not helpful to make a joke of it. But I honestly think it's a ridiculous notion that people with six-figure salaries and stock options are somehow being discriminated against — in a society that's built to support them at every step.
I only ask about things that matter personally to you. It's about your rational self-interest, and nobody else's. Do you want to be in an echo chamber where everybody has the same political opinion? Does it really benefit you? Does it benefit your political agenda to push out and silence your political opponents?
Your original comment talks about echo chambers and insinuates that people at Uber or Palantir "know better" than to discuss their political opinions.
Was that a good faith attempt to engage in discussion? What kind of response did you expect? How does it benefit you to come in and say the equivalent of "you all suck and wouldn't listen to me anyway"?
> What kind of response did you expect?
Something like this.
> How does it benefit you to come in and say the equivalent of "you all suck and wouldn't listen to me anyway"?
I don't hope to change your mind. But I want to change the mind of all the people reading this thread.
Anyway, hesitating to say something is not always the best idea; maybe it ought to be said aloud while everybody else hesitates.
People are social creatures, programmed to respond to approval and disapproval from others. This includes replies as well as votes. It's irrational, but that's who we are.
Now, try registering an alternative account on HN and writing political comments, polite, articulate, and reasonable, from a conservative point of view. What do you think the response will be like? Will you feel encouraged to proceed, or will you feel (emphasis on "feel", as these things are emotional and not rational) that you'd rather not continue with your point of view?
I think FB will be known in history as a major data provider for AI bullshit.