
I've always wondered how such discussions go in company meetings where some product/feature has a harmful effect on something/someone but is good for the business of the company.

I cannot believe that everyone is ethically challenged, only perhaps the people in control. So what goes through the minds of people who don't agree with such decisions? Do they keep quiet, just worry about the payroll, convince themselves that what the management is selling is a good argument for such a product/service...

Luckily I've never had to face such a dilemma, but I don't envy those who have faced it and come out of it by losing either their morals or their jobs.




> some product/feature has a harmful effect on something/someone but is good for the business of the company

If you start with such black-and-white assumptions, you will never be able to actually empathize with those people. Nothing is that simple when you're close enough to see the details.

Things good for the company should be and frequently are good for the people using the product. The same thing can also harm the same people, or a different set of people, or the company, in a way that's impossible to disentangle from the good.

There's a whole back and forth about Facebook and political divisions. It starts with someone assuming that tech companies put people in bubbles and echo chambers, assuming they'll only be engaged with stuff they agree with. Then you run the numbers and realize that people are far more isolated from opposing opinions in real life than they are on the internet: you interact with more people online, and they censor themselves less. But at the same time, you can change your mind about echo chambers and decide that the exposure is a bad thing, because being exposed to different opinions makes you more entrenched in what you already believe.

It's never as simple as "this is bad for everyone except us but at least we're getting rich". Everything has more nuance than that when you experience it up close.


> Things good for the company should be and frequently are good for the people using the product. The same thing can also harm the same people, or a different set of people, or the company, in a way that's impossible to disentangle from the good.

> It's never as simple as "this is bad for everyone except us but at least we're getting rich". Everything has more nuance than that when you experience it up close

This too needs more nuance. These points even apply to outright crime. Legal prohibitions should sometimes be expanded in the public interest, because sometimes it essentially is the case that something is bad for everyone except some small group.

This is reflected in the way data-protection laws now exist in many countries, for instance.


People are more isolated in the real world? Please provide a source. Aside from the fact that this is hard to measure now that the underlying medium has itself been modified — I would hardly expect this to be the case. Online I am connected to those whom I socialize with or am otherwise professionally connected to. In the “real world” this constraint is largely absent.


This is the hardest source I can find, but it only measures what happens on Facebook. The numbers do seem higher than what I'd expect for IRL conversations, though:

https://research.fb.com/blog/2015/05/exposure-to-diverse-inf...

> Online I am connected to those whom I socialize with or am otherwise professionally connected to. In the “real world” this constraint is largely absent.

This seems entirely backwards to me? Maybe you talk more with strangers IRL than online, but I doubt it. I only have n=1 (me), but we are talking right now. Who knows where we live in relation to each other?

So much of politics is split between urban and rural environments. Those groups are defined by where they live, so I expect very few conversations in person between the two, especially about politics.


Thanks for the link. Reading now. Regarding my reply, I was thinking more about social networking apps like Facebook, Instagram, Snapchat, WhatsApp, or LinkedIn, and less about the Hacker News/Reddit types. Mainly because I think the bulk of social interactions happen there.


It does seem logical: your in person interactions are mediated by your personal relationship with people. Online you can come across anything and everything. The in person equivalent would be walking by ten or twenty small protests set up with megaphones loudly arguing for various things you vehemently disagree with.


I highly doubt you and I are socially or professionally connected and yet here we are.


This connection doesn't mean shit compared to someone you see face to face and share experiences with. Yet this watered down form of connection seems to have replaced the latter, which I think is the fundamental social problem of the internet.


Does the quality of the connection matter? The argument is about being shown different viewpoints, and that the internet shows you more of them than in-person interaction does.

Is that hard to disagree with? I didn’t even know atheism was a thing until I was on the Internet. No one in my community was an atheist and the media we were provided didn’t reference it much.


I think quality is almost the only thing that matters.

Personal anecdotes aside, we're mostly terrible at dealing with new ideas when they conflict with stuff we already know or that is close to our identity. Remove the human element of the connection and we're even more likely to dismiss said conflicting ideas outright as stupid (I'll try to link to that research). It's not hard to imagine how that might lead to strong yet poorly justified social division.


> In the “real world” this constraint is largely absent.

In the real world you are connected to people living and travelling around you, and that is not necessarily an unbiased set of people. It can be quite far from the average random group. You're still in a bubble.


Yes, it's never simply black-and-white, but you're overstating that case, especially with Facebook. By now, nearly everyone in tech and many adjacent industries (e.g., entertainment) has heard about and probably internalized the downsides of Facebook, particularly the mechanisms and tactics employed to advance Facebook to the detriment of society at large. It's pretty clear many of those people at Facebook are avoiding or ignoring inconvenient truths when it comes to removing those mechanisms and tactics, to the benefit of society at large but to the detriment of Facebook.


That's not a counterargument. Nuance doesn't contradict the black-and-whiteness of the situation. Sometimes nuance just means there are many shades of black.

> The same thing can also harm the same people, or a different set of people, or the company, in a way that's impossible to disentangle from the good.

It might be impossible to 100% disentangle. But it is nonsense to suggest it could ever be impossible to >0% disentangle. And they have a moral obligation to prioritize disentangling them, to maximize the good and minimize the harm, and to structurally incentivize themselves to succeed at that.

But your attitude creates the exact opposite incentive: the more entangled the good is with the harm, the more defensible it is for them to passively enrich themselves through their inaction.

Don't fall for it. Demand more.

Demand structural changes that incentivize real fixes, for example, pledging that ad revenue from hate content and fake news be returned to the advertiser and the same amount also donated; or pledging that feelings of community vs feelings of divisiveness affect executive or company-wide bonuses. These particular ideas might be stupid, but don't let them get away with not even trying.


> Things good for the company should be and frequently are good for the people using the product.

I think there's a misalignment here. In traditional business what you said may be generally true (with some striking counterexamples like cigarette companies). In internet advertising things good for the company should be and frequently are good for the company's customers. Facebook's users are not its customers, and Facebook is generally incentivized to keep users on the site and consuming content (and advertising) by any means necessary - regardless of the long-term harm it might cause the users.


I've been there, obviously not to the level of a facebook board member.

IMO the feeling is not really that different from making choices as a consumer ("was this shirt made by child labor?", "was the animal this meat comes from treated humanely?", etc). People tend to turn a blind eye to those questions unless something comes up that hits close to home.

To be clear, I'm not saying that's justifiable or a good mindset to have, just what I think happens.


I disagree and think it is significantly different. Facebook decision makers have way more agency in the directions their company takes than a consumer has in their choice of clothes to buy at Target (or wherever).

Shirt consumers don't have much of a choice. They can only buy what's for sale (and in their price range). And then, how can they be sure if a shirt was or wasn't made by child labor? How would an individual consumer's behavior lead to ending child labor?

According to the article, Facebook execs understood what the product was doing, and, while they have the ability to stop it, don't. Maybe I understand what you're saying if we're talking engineers/middle managers, but that's a boring conversation. The buck has to stop somewhere.


Are you seriously arguing that consumers can't spend $5 less on a shirt so that instead of having "BALR." it was made under less shitty conditions? Consumers have plenty money for t-shirts, they just choose to spend it on fashion statements instead of thinking about working conditions of people half a planet away.

There's plenty of choice. It's not about choice, it's about what's on your mind, and what you put on your mind. If you want to look cool, you put the working conditions concern off of your mind. If you want to make money, you put the division concern off of your mind.

The buck stops at every stop.

edit: did a quick google, first result on a plain white t-shirt that's fair trade is $25, first result on 'fashionable' plain white t-shirt (by balr or supreme) is $60...


Basic economic theories require that consumers have full information and make rational decisions. Neither of those is a valid assumption.

In this case, the vast majority of people don't know if a shirt was made with child labor or not. If this information was clearly communicated to every consumer I'm sure you'd see consumer behavior change to some degree.


I actually feel the opposite. Consumers have the ultimate choice -- their choice is not beholden to anyone except themselves. Then they can execute their choice unilaterally.

A VP or even the CEO is beholden to shareholders, their employees, their advertisers, their own ethics, their users, various government regulations (and government interests that are not laws but what they prefer). So almost everything they do is a tradeoff.


What a cop out. You can't just pass the buck forever. You want to bring shareholders into this? Was exploiting the human brain’s attraction to divisiveness put to a vote? What does it matter when Zuckerberg has a controlling share of the company [0]? He answers to himself.

Facebook spent almost $17MM on lobbying efforts last year [1]. I wonder why governments don't exactly have an eagle eye on this...

The rank and file employees at Facebook have no say about this. Tim Bray leaving Amazon to no ill effect shows this.

We're talking about Facebook exploiting the human brain to increase time on the platform. The users have little to say about this, and as long as the users are there, advertisers have nothing to say to Facebook.

So that leaves Facebook answering to their own ethics. Yes, that's the problem.

0 - https://www.investopedia.com/articles/insights/082216/top-9-...

1 - https://www.opensecrets.org/federal-lobbying/clients/summary...


A corporation is a device for maximizing profit and minimizing ethics. Everyone can say they're behaving ethically. Consumers can say, "Well, all my friends are there, I can't quit," and it's true for some people. The CEO and other decision-makers can say, "Well, I have to do this otherwise the shares go down and I could get fired," and they may be right. Shareholders can say, "I'm just investing in the most profitable companies, if they were doing something bad, it should be illegal," and they have a point too.

This is where governments come in. Companies should behave ethically, but ultimately we shouldn't just leave it up to them. That's why societies have laws. What we really need to do is use regulation and penalties to force Facebook into ethical behaviour.

Of course, this isn't going to happen because there's no political will to do so, generally due to "free speech" or "free market" objections.


This is not passing the buck. It's acknowledging that there are many stakeholders involved in a company+platform, and that many decisions are about making tradeoffs rather than having a "right" answer.

If you always go with the populist vote, like when users rioted about the news feed when it was first introduced, https://techcrunch.com/2006/09/06/facebook-users-revolt-face... then you may be sacrificing the long-term viability of your company. This harms employees, investors, and eventually the public. Are you saying that's not even a consideration at all?

We're not talking about "Facebook exploiting the human brain to increase time on the platform". You brought up Target and shirts. So we're talking about who has more agency, users or executives, in a general manner. That consumers generally only need to concern themselves with their own ethics, versus the complex entanglement of ethics at a company, gives users more agency to make choices reflecting their ethics.


Why couldn't you choose where to buy your shirt? Shirts can be made anywhere; they should be one of the easiest items to find multiple vendors for.

If you are saying that at Walmart or another big store they only have 4 brands in your price range, and you can't tell which ones involve child labor: you could research it if you cared. By not buying a brand you reduce your risk by 99%.


As a consumer, you may not be able to stop child labor, but you can vote with your wallet.

Several of my friends buy clothes from a few vetted brands because of exactly this issue.

Then I have another friend who was a huge cruise ship fan. He encouraged me to go on my first cruise too. But then there was a report about mistreatment of cruise ship employees, and now he is totally against cruise ships. His actions alone probably won't change anything, but if enough consumers start to act like him, a change may happen.


I often wonder. Even if people stop buying, the feedback signal to a company can be very inefficient.

They might not understand where they went wrong and think they need to lower prices or something. Of course, that just leads to more pressure on working conditions.


Probably it will do two things.

If he spends that money locally, it helps the community.

Cruise ships will treat employees worse to make up the shortfall in cash. The cruise ship industry needs a tell-all Netflix documentary to change things.


This kind of thinking, looking behind the veil of money, has convinced me to stop using currency altogether, for now, for the most part. I still pay for web hosting and domains, and I still buy bottled water for lack of better options, but for anything else like clothes, food, and houseware, I've stopped buying altogether. Everything you buy carries a huge veiled cost in human health and lives, animal and plant health and lives, environmental damage, habitat loss, and so on. I just don't want to be complicit anymore.

I wear the same clothes, and I pick up the clothes people leave in boxes on the street or go to churches. There is a glut of consumable goods, and the charities are throwing tons of it away every day. The same goes for food, kitchenware, paintings, and decorations. I've been told my great-grandmother used to say, "God gives you a day, and then food for that day." That is the approach I have taken. Went for a walk yesterday, found two paintings. One of them needed finishing, which I'm happy to do.

For 3+ years, I have not used any "external" products like shampoo, lotion, or cream, not even soap, except occasionally buying a bar of Dr. Bronner's soap (paper wrap) and using that for laundry. Almost everything in that department, even the "organic" or "natural" or "eco-friendly" stuff, has a long ingredient list full of what I want to avoid both putting on myself and drinking, which is what's going to happen if I put it down the drain. Also, all of it fucks up the skin biome. I've not had any skin problems since I unsubscribed from it all.

And so on. I know it's not an option for everyone, but it's the only option for me, as long as I have a choice, to choose this way, and to keep pondering how to do better every day.


Where do you get free food?


I live in a city, so mostly from dumpsters. Tons of recoverable food is thrown out every day. Way, way more than I can figure out what to do with.

I've also gotten more into fasting and eating less, but so far, no involuntary fasting has occurred.

I've also become more social, so sometimes others share their food with me, even in these difficult times. Yes, they bought it with money, and fed the eco-shaver, but I think it's still less than if I'd done it myself.

Occasionally, I go to restaurants towards closing time, and ask if they have any leftovers they are throwing away.

A great book I read on all this is called "The Scavengers' Manifesto". I've also learned a lot from meeting others on the street and looking through the trash.

I've done a bit of foraging when in wilder areas, and I've seen places where people grow most of their food themselves, in small communities. I think this is the future.


It grows in the ground, or around the bones of other living creatures.


I just wanted to say that's awesome and you're my hero. :-)


That may apply in many cases, but I don't think an engineer or manager at Facebook can use that excuse. They'd have lots of other options.


I think what an FB exec is trying to decide is more analogous to "should we use child labor to make our shirts?" or "should we incur higher costs to run a humane farm?"


From my experience there are very strong currents in a group that are very hard to go against as an individual. Only very contrarian people will go against the grain in formal meetings with high-level executives or other individuals with status in a group. This is why big organizations often produce decisions that the team behind them doesn't agree with and that look silly from the outside. Many people on such a team will not feel personally responsible, because they feel like they didn't have any influence on the decision-making process, even if they could have said something. There are other dynamics at play I think, but this is one of them. (The contrarians seem to not survive long in the corporate world.)


This dynamic is present in FB the website as well. You find clusters or groups of folks who re-amplify a point. It's so effective that you can find "Re-Open" rallies in your state driven by a shady "gun-rights" nonprofit, even though polling largely supports the lockdown and the actions taken to curb the pandemic. You also find that outside the group, people are a lot more nuanced and reasonable. It's fascinating. What is even more concerning is that a lot of bots drive this behavior.

I think the issue is that in the long term it dilutes FB. I know many people who don't post on FB, preferring Instagram etc... I know these are still FB platforms but it's a big shift. So FB will eventually become Usenet and effectively non-functional.

There's some type of social network that's between Instagram and FB that doesn't exist yet.


Also, IME, if you do say something, others jump down your throat quickly and viciously. I still remember one former cow-orker and his words: 'they debate, they decide, we deliver'. That project ended up losing the company millions and left it a has-been in ecommerce, because people chose to accept and support the utter insanity that was going on right in front of their faces.


As a programmer I am not responsible (or paid) for management decisions. It is also not my job to fix a toxic culture in a company.


As management is responsible for bad management decisions, so too is the programmer responsible for implementing bad decisions.


A programmer can still implement a bad decision very well :)


> I cannot believe that everyone is ethicality challenged

No, but it's not always clear what the ethical choice is. In philosophy, this is known as pluralism [1] -- the fact that different people have irreconcilable ethical views, with no way to find any "truth".

That might seem like a lot of justificatory mumbo-jumbo, but there are genuine ethical arguments on all sides. For example, did you know that in the postwar 1950's, the lack of polarization and divisiveness in American society was seen by many as a major problem, because it didn't provide enough voter choice between the two parties? [2]

There are also plenty of ethical arguments that giving people what's "good for them", rather than what they want (click on) would run counter to their personal autonomy, and therefore against their freedom. This is what critics of paternalism believe. [3]

Then there's the neoliberal argument that markets always work best (absent market failure). That most of human progress over the past couple of centuries has resulted from companies doing what's most profitable, despite how non-intuitive that is. In that sense, Facebook doing what makes the most money is ethically right.

I'm not saying I agree with any of these -- in fact, I don't.

But I am saying that supposing there's some kind of obvious right ethical answer, and implying bad faith towards people at Facebook that they're somehow making decisions they genuinely believe to be wrong but making anyways, is not accurate.

[1] https://en.wikipedia.org/wiki/Pluralism_(political_philosoph...

[2] https://newrepublic.com/article/157599/were-not-polarized-en...

[3] https://en.wikipedia.org/wiki/Paternalism


> For example, did you know that in the postwar 1950's, the lack of polarization and divisiveness in American society was widely seen as a major problem, because it didn't provide enough voter choice between the two parties?

There was not a lack of polarization and divisiveness in American society.

The divides in American society and politics didn't map well to the two major political parties because there was a major political realignment in progress and the parties hadn't yet aligned with the divides in society.

The problem was the divide between the major parties not being sharp on the issues where there were, in fact, sharp, polarizing divides in society, preventing members of the public from effectuating their preferences on salient issues by voting.


In the 50s and 60s, there were really four parties, joined into two by coalitions. On the Democratic side, there was a social democratic, leftist faction, tensely allied with a Southern party (the Dixiecrats). On the Republican side, there was a pro-corporate but moderately liberal faction (the Rockefeller Republicans) allied with a harder-line conservative/libertarian faction (the Goldwater Republicans).

Two things happened in the 60s and early 70s: the Goldwater faction largely took power in the Republican Party, and because the Democratic Party embraced civil rights, the Dixiecrats first flirted with independence (George Wallace's campaign) and then gradually switched parties, so now we have the oddity that there are people who fly Confederate flags but are registered members of the party of Lincoln. Many people who would have been Republicans in the old days are now the moderate/neoliberal faction in the Democratic Party.

So we still have four parties, they were just reshuffled. Now the tension in the Democratic Party is between the old FDR/LBJ new deal supporters, and their younger socialist allies, and the more pro-business neoliberals. On the Republican side it's between the business side (they don't care much about ideology, they just want to make money) and the hard-core conservatives.


So are you saying polarization makes it easier for people to vote? It sounds plausible and undesirable.


> So are you saying polarization makes it easier for people to vote?

No, I'm saying that the description that polarization was absent is wrong.

I'm also saying alignment of the axis of differentiation between the major parties in a two-party system and the salient divides in society makes it easier for people to make meaningful choices, and feel they are doing so, by voting.

When there are sharp polarizing social/political divides, as there were over many issues in the 1950s, and they are not reflected in the divides between the parties (as they often weren't in the 1950s), then the government cannot represent the people because the people cannot express their preferences on important issues by voting.


I am sorry to say, this seems like a thoughtful answer but there is a lot of nonsense in it as well.

For example, pluralism doesn't state that there is no way to "find truth", but rather that, in light of multiple views, we should have good-faith arguments, avoid extremism, and engage in dialog to find common ground.

> but there are genuine ethical arguments on all sides.

These ethical arguments, however genuine they may be, are not equal. Otherwise you would be falling victim to the false balance fallacy, commonly observed in media outlets, or the "both sides" argument we have so unlovingly become aware of in recent times. The false balance fallacy essentially tosses out gravity, impact, and context.

> That most of human progress over the past couple of centuries has resulted from companies doing what's most profitable, despite how non-intuitive that is.

Despite the over-simplicity of framing it as companies simply doing what is most profitable, this is, in fact, extremely intuitive, and has been studied, measured, and observed. I am curious what you find unintuitive about it?

> But I am saying that supposing there's some kind of obvious right ethical answer, and implying bad faith towards people at Facebook that they're somehow making decisions they genuinely believe to be wrong but making anyways, is not accurate.

This view may be true in a vacuum, but it is irrelevant. We live in American society, and there is an American ethical framework in which Facebook's actions can be viewed as unethical. Other countries that have this similar issue have their own ethical frameworks in which to deem Facebook's actions ethical/unethical.


> pluralism doesn't state there is no way to "find truth"

To the contrary, that is literally what pluralism as a philosophical concept says. You can read up on Isaiah Berlin's "value pluralism" [1], for example.

> These ethical arguments, however genuine they may be, are not equal however

On what basis? Again, the entire premise of pluralism provides no method for comparison.

> this is, in fact, extremely intuitive

Many would disagree. You might enjoy reading [2], which explains just how hard it is for citizens to understand it, from the point of view of an economics professor.

> and there is an American ethical framework

Except there isn't, that's the point. For example, Republicans and Democrats obviously believe in deeply divergent ethical frameworks. And there's far more diversity beyond that. Plus there's no way to say that any American ethical framework would even be right -- what if it were wrong and needed correction?

[1] https://en.wikipedia.org/wiki/Value_pluralism

[2] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=999680


I dunno: the last sentence of the abstract of [2] is:

> A better understanding of voter irrationality advises us to rely less on democracy and more on the market.

To my mind this immediately brings up the question of why people who are irrational voters would be expected to be rational economic actors.

- - - -

...Ah! I just looked at it again and saw the sub-heading: "Cato Institute Policy Analysis Series No. 594"

PLONK!


> For example, pluralism doesn't state there is no way to "find truth"

Well, there are lots of different ideas lumped together as “pluralism”, but most of them not only hold that there is no way to find truth on the issues to which they apply, but that there is no “truth” to be found.

> We live in American society,

Some of us do, some of us don't.

> and there is an American ethical framework in which Facebook's actions can be viewed as unethical.

Sure, but there are many mutually contradictory, and often mutually hostile, American ethical frameworks, so that’s true of virtually every actor’s actions, and virtually every alternative to those actions.


> American ethical framework in which Facebook's actions can be viewed as unethical

I'm curious what you mean by this, because I'd expect the American values of independence and free expression to be counter to wanting Facebook to actively suppress divisive discourse. (Yes, I know the first amendment only applies to the government; the point is the spirit of the "American ethical framework".)


The profit maximizing (shareholder value) argument is fairly recent.

At many other times, the concentration of wealth, and therefore power, was identified as a problem and actively mitigated. For example, the founding fathers of the USA were quite anti corporate and actions like the Boston Tea Party were explicitly so.


Nah. The founding fathers were the richest colonists, and George Washington was the richest of them all. It was some rich people opposing the richer people overseas they were descended from.

They didn’t want concentration of political power, but they had the economic power. Interestingly, political power endangers them because it has the power to take away their economic power. That’s the real battle still going on today.


How does one disprove the other?


Because it wasn’t concentration of power they were concerned with. They were only concerned with concentration of power against them (political power against their right to profit).

It was a selfish play, not a principled one. For example, slavery was written into the constitution. How the hell does that happen when all men (and no women) were supposedly equal? Slavery was enshrined as an economic and then a political right (the three-fifths clause).

Not all of them were for slavery but that was the end result of the document/of the competing forces at play. It institutionalized slavery in the new nation.

Wikipedia

https://en.m.wikipedia.org/wiki/United_States_Declaration_of...

“According to those scholars who saw the root of Jefferson's thought in Locke's doctrine, Jefferson replaced "estate" with "the pursuit of happiness", although this does not mean that Jefferson meant the "pursuit of happiness" to refer primarily or exclusively to property.”

Over time, personhood has gradually been extended to more and more entities (sometimes non-human).


The colonists were ALL for maximizing economic power (pursuit of estate). They were ALL for limiting political power against economic power.

So this notion that the colonists were against economic power is just wrong. Others may have held that notion, but not the colonists, if you go by the Declaration of Independence and the Constitution.


You have to go to other historical events to find evidence of that. French Revolution, Bolshevik Revolution.


And if that is the case, then you have people taking both sides of the argument over a long period of time.... Pro economic freedom vs limits to economic power.

It isn’t well recognized. It’s just a debate/fight people have been having for a while.


It's this dynamic where some people want to treat each other as peers in some ways, because they are stronger as a group, i.e. united we stand, individually we fall. However, they tend to exclude others, since if you include everyone there is no advantage (us vs. them, the other).


Also, the Boston Tea Party wasn’t anti-corporate. It was against the tea tax to be paid to the government of England: “No taxation without representation”. It was anti-government-without-representation.


> I've always wondered how such discussions go in company meetings where some product/feature has harmful effect of something/someone but is good for the business of the company.

I mean, it's one thing if we're talking about something like an airbag, where harm can result from normal usage because of a design flaw. It's another thing to talk about the Ford Pinto -- where harm could happen due to accidental misuse.

Does Facebook encourage division? Do ice cream ads encourage obesity? Do alcohol ads encourage drunk driving? (I get that Facebook's "engagement algorithms" are designed to maximize profit, and have a side effect of showing you things that are upsetting and frustrating... but that isn't what they were designed to do. I'm no fan of "the algorithm", and don't think they should use it, but I think they should be free to.)

In this instance, I don't think it's fair to say Facebook has a "harmful effect". The abuse, misuse, and addiction to Facebook can be harmful, for sure... but that's not Facebook's fault. That's the end user's fault.

Should Facebook come with a warning label, like cigarettes? I don't think so. (I also don't think cigarettes should be mandated to come with images of people dying of lung cancer when alcohol can be sold without images of people with liver disease... but I digress.)

Everyone wants to "mitigate harm". But you need to be able to separate "harm due to malfunction", "harm due to accidents", and "harm due to abuse". This seems to be firmly in the third category, which is the least concrete and most "squishy" category.

Especially squishy, when "harm" is considered to be people saying and/or thinking the wrong things.


> In this instance, I don't think it's fair to say Facebook has a "harmful effect". The abuse, misuse, and addiction to Facebook can be harmful, for sure... but that's not Facebook's fault. That's the end user's fault.

Yeah, it wasn't me who posted this reply, it was the cells in my body. It's their fault... I think complex systems create effects that go beyond the individual parts. Facebook is running and profiting from such an 'effect' on society.

Their right to freely express their creativity by making the feed how they wish should be balanced with the large scale (negative) effects that appear in the system.


Facebook internal memo by Andrew Bosworth, VP June 18, 2016

The Ugly

We talk about the good and the bad of our work often. I want to talk about the ugly.

We connect people.

That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide.

So we connect more people

That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.

And still we connect people.

The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.

That isn’t something we are doing for ourselves. Or for our stock price (ha!). It is literally just what we do. We connect people. Period.

That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.

The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don’t win. The ones everyone use win.

I know a lot of people don’t want to hear this. Most of us have the luxury of working in the warm glow of building products consumers love. But make no mistake, growth tactics are how we got here. If you joined the company because it is doing great work, that’s why we get to do that great work. We do have great products but we still wouldn’t be half our size without pushing the envelope on growth. Nothing makes Facebook as valuable as having your friends on it, and no product decisions have gotten as many friends on as the ones made in growth. Not photo tagging. Not news feed. Not messenger. Nothing.

In almost all of our work, we have to answer hard questions about what we believe. We have to justify the metrics and make sure they aren’t losing out on a bigger picture. But connecting people. That’s our imperative. Because that’s what we do. We connect people.

Shortly after the leak Bosworth distanced himself from the post. https://www.theverge.com/2018/3/29/17178086/facebook-growth-...


I mean, he's not wrong. Facebook sucks because a lot of people are not-great human beings, and Facebook just allows you to see that. Oops. People might think that peer pressure would shame people into better behavior, but the concept of shame no longer exists in the post-modern world. Everyone feels justified in whatever they believe, and the Covid-19 situation on the platform couldn't be a more perfect illustration of the problem.

I say this from first-hand experience. I discovered that people I called friends were racist. I now consider those friends merely acquaintances, and I have since deleted my account. Better to just be ignorant of people's ignorance when I can't do anything about it.


I read that discussion as it was happening on the internal FB@work. Oh man, there were so many true believers replying about how this was so wise and inspiring. As far as I remember, no one questioned him. I wish I had posted that, in a biological context, something that grows without bound or care for its environment is called cancer. There was Boz, arguing that Facebook is a cancer.


Cancer is just a specialized case of evolution that in many instances is turbocharged by genetic instability... essentially the biological form of 'move fast and break things'. This results in a very adaptive cell line that handily outcompetes everything constrained by purpose, while also overcoming novel threats thrown at it by the greatest medical minds of our time.

If it didn't kill people that we love we'd marvel at its capability.

Is Facebook a 'cancer'? I think it's more of a cultural radiological device that exposes the cancer that's already there.


Even that is very handwave-y. It talks about "connections" and events, but not that the algorithm (in the broad, commonly-used sense) encourages and incentivizes that which builds "engagement."


>Nothing makes Facebook as valuable as having your friends on it, and no product decisions have gotten as many friends on as the ones made in growth. Not photo tagging. Not news feed. Not messenger. Nothing

Is this certain? The effects of useful features on growth are longer term and harder to measure than, for example, placing and styling friend suggestions in a way to confuse users into thinking they're friend requests.


This sounds like complete bullsh*t.

Where does he bring up the subject of Facebook connecting people to the level of addiction? With the only goal of maximizing screen time (and dopamine) to sell more ads? It's not "connecting people", it's "addicting people".

It is as if a third-world food bank for Africa were bragging that it feeds the world so well that 90% of Africa is now overweight, but that's good because it continues to "feed people".


> I cannot believe that everyone is ethically challenged

Right, so what assumptions are leading to the conclusion that this situation can only be caused by everyone being ethically challenged? Are ethics shared and absolute enough for the answer to this question to be easy or black & white? https://en.wikipedia.org/wiki/Moral_relativism

> Luckily I’ve never had to face such a dilemma

Are you certain about that? I realize you’re talking specifically about C-level execs debating something in a board room, but consider the ways that we all face lesser versions of the same dilemma. For example, do you ever consume and/or pay money for things that are generally harmful to society? Environmental concerns are easy to pick on since more or less everything we buy has negative environmental effects... ever bought a car? flown on an airplane? Smoked a cigarette or enjoyed a backyard fire pit? Bought anything unnecessarily wrapped in plastic? It’s really hard to make the less harmful choice, and a lot of people don’t care at all, so by and large as a society we put up with the harm in favor of convenience. As consumers, we are at least half of the equation that is leading to socially harmful products existing. If we didn’t consume it, the company meetings wouldn’t have anything to debate.


The Boeing 737 MAX crashes killed 346 people. So, it seems that death is not a deterrent.

The emails from the case give a good sense of the internal discussions: https://www.theguardian.com/business/2020/jan/10/737-max-sca...


> "Boeing 737 MAX killed 346 people. So, it seems that death is not a deterrent."

I really don't understand your point, unless you're implying that there was a meeting where Boeing planned to kill those people. I am not an aviation expert, but what happened with the MAX seems to be a product of the certification process, urgent business needs, systems engineering issues, and bad internal communications at Boeing.

I haven't seen any evidence that someone specifically predicted the chain of events which would unfold on those flights, and clearly communicated the issue, then had executive(s) respond that it was 'worth the money'.

As an aside, I have seen quotes about the 787, which were similar to those in your linked article (mostly with respect to production quality issues), yet the 787 has not had similar accidents. One problem with working on such huge projects is that the line engineers do not understand that managers are constantly hearing alarmist 'warnings' which don't pan out. If 1% of Boeing staff give false alarms in a year, that means there are 1600 false alarms.


> I haven't seen any evidence that someone specifically predicted the chain of events which would unfold on those flights, and clearly communicated the issue, then had executive(s) respond that it was 'worth the money'.

People understand the consequences of what they say. I doubt that most people will say such statements out loud, even when they know they are true.

But people knew, and money was involved.

* February 2018

“I don’t know how to refer to the very very few of us on the program who are interested only in truth…”

“Would you put your family on a MAX simulator trained aircraft? I wouldn’t.”

“No.”

* August 2015

“I just Jedi mind tricked this fools. I should be given $1000 every time I take one of these calls. I save this company a sick amount of $$$$.”


I have read similar quotes about most modern aircraft development programs, yet aviation is quite safe. The fact that you can find a few alarmists in a company of 160,000 is rather unsurprising.

Those quotes would be much more convincing if those employees put every prediction they ever made on the record, not just the ones that turned out to be sort-of right in hindsight.

From a manager's perspective, you can't listen to everyone complaining about being rushed, understaffed, and underfunded (because everyone looking to cover their butt in a bureaucracy complains about all three). On the other hand, you have to be on the lookout for credible issues.


You cannot have it both ways. First you claim that no one spoke up, and then you dismiss the ones who did as alarmists.


If someone does not make specific and testable predictions which turn out to be right, they are useless alarmists. If you want to read about how to assess predictors (and improve predictions), I suggest you read: https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_...


I'm a very good forecaster, and skill in this area doesn't stem from reflexive dismissal or ability to deploy fallacious counter-arguments.


Would you please point out the fallacy?


Bifurcation


I did not present a false choice between two options, I only defined what an alarmist is. I regard alarmists as an extreme on the spectrum of forecasters.

Bifurcating would have been saying that everyone is either a superforecaster or an alarmist, and I never said that.

You may not agree with me, but that doesn't mean that I fell into a logical fallacy.


> If someone does not make specific and testable predictions which turn out to be right, they are useless alarmists.

This conversation doesn't seem like it's going to go anywhere productive.


It's more that there were several meetings where issues were raised that would kill people if they occurred, and those in charge decided the risk factors were minimal enough that they could execute on the plan.

Nobody planned to kill the astronauts on the Challenger. Such a systemic failure to anticipate and manage risk correctly is a team effort and heavily incentive-driven. Putting incentives in place that reward risk-taking increases the odds someone will die.


I think I have a very different understanding of the root cause of the o-ring failure on Challenger than you do.

The common understanding seems to be that the managers decided to launch when the booster temperature was cold (though not necessarily out of limits), and some were warning that it may cause some unforeseen issues.

My read is that each limit in the operations manual should have been backed by a test to failure, or at least a simulation of what would occur if the vehicle was operated outside the limits. Such a process allows the operators to clearly understand what can go wrong, and why the limits are set where they are. This is what they did on the SSMEs, but not on the boosters (because they thought the boosters were fairly simple).[0]

[0] https://ocw.mit.edu/courses/aeronautics-and-astronautics/16-...


More concisely, if you won’t do it, then you will be replaced by someone who will.


Of course, no one planned it. But encouraging or demanding to take shortcuts is what caused it.

I have been in the software industry for 15 years and this happens all the time: being forced to release unfinished features, being asked to ignore security, backups, etc. I would imagine the same thing happens in other industries.


My understanding of the MAX issues is that the issues were not really shortcuts, though they might look that way in hindsight (because every mistake looks that way in hindsight).

From my non-aviation perspective, it looks like they basically pieced together a bunch of complex systems, with each team making a number of (different) assumptions about each system. The systems themselves were influenced by FAA requirements to maintain the old certificate, which meant that certain desirable changes were impossible, so workarounds were devised. The problems were due to misunderstandings about how the systems would work when assembled, and these issues were not discovered and/or communicated. It really seems like a systems engineering problem, aggravated by a number of external influences (including business reasons and certification).


There is no FAA requirement to maintain the old certificate. Boeing and its customers wanted to do that for cost savings.

It is supposedly costly in time and money to acquire a new type rating, but it has obviously been done.

The airlines wanted a single pool of interchangeable pilots flying nominally interchangeable planes (their existing 737s and the 737 MAX). Supposedly one of the airlines threatened to take new business to Airbus, and had penalties written into the contract to make the 737 MAX fly under the existing certificate.

So it wasn’t the old certificate driving these issues; it was Boeing and its customers wanting to maintain the old certificate that drove the issues. That is a very large difference.


Perhaps my previous post was vague, but I meant 'FAA requirements [of commonality, required to] maintain the current certificate'.

The FAA may be in the right or in the wrong, but it has made certifying new designs almost prohibitively expensive and time-consuming; for evidence of this, simply look at the Cessna 172 (still in production on a 60-year-old certificate), and what happened when Bombardier tried to put a new airliner into production.

You're definitely right that the airlines wanted interchangeable type ratings for crew, but the issue is slightly more complicated than you're painting it.

I never argued the old certificate forced the issues, the certification system just strongly incentivized 'upgrading' the 737. This was one of many causes.


Wrong. Boeing engineers raised concerns that were dismissed.

“Frankly right now all my internal warning bells are going off,” said the email. “And for the first time in my life, I’m sorry to say that I’m hesitant about putting my family on a Boeing airplane.” [1]

[1] https://www.cnbc.com/2019/10/30/boeing-engineer-raised-conce...


I didn't say that nobody raised concerns, I said:

>>"I haven't seen any evidence that someone specifically predicted the chain of events which would unfold on those flights, and clearly communicated the issue, then had executive(s) respond that it was 'worth the money'."

In large projects like the MAX, there are always people raising concerns.


>In large projects like the MAX, there are always people raising concerns.

Does that mean that people raising concerns can be ignored, or does that mean that most large projects only get by with luck?


I think that's a really interesting question, but I think the answer is orthogonal to your dichotomy. In my experience, very successful projects depend on the great managers that know who to listen to in each different situation, and they know how people will react in each situation.

One of the best examples of this is Dave Lewis, who led the design of the F-4 Phantom II, one of the most successful fighter aircraft of all time. He directed the structural design team to design for 80% of the required ultimate load, because he knew that everyone was conservative in their numbers; then the design was tested. The structure ended up lighter than comparable aircraft, and the Phantom II had phenomenal performance.

It also helps if the managers are good at making predictions of their own; Tetlock has written two great books about this, including: https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_...


> So, it seems that death is not a deterrent.

Well, the tobacco industry is still alive and well, and those companies literally peddle death.


They peddle a high risk product. So do companies that manufacture motorcycles and parachutes.


This comparison is flawed in several respects. The most obvious is that cigarette companies spent decades intentionally misleading the public about the dangers of their product. This is not the same as just selling a potentially dangerous product, especially one where the dangers are so viscerally obvious as with a parachute.


> parachutes

Parachutes kill people? I thought they do the opposite. Maybe firearms or alcohol make better examples.


If you use a parachute one time in an emergency, yes, it is a life-saving device that still carries a high level of risk. However, I believe they were referring to people who choose to parachute for sport/recreation rather than in emergency situations.


The habitual use of any of the above will increase your chances of untimely death.


But in the case of parachutes, it's not the device, it's the activity. I know it's splitting hairs, but it's important, especially when it comes to assigning moral responsibility to manufacturers.


It's kind of a combination of all of the above. The majority of employees are working for a paycheck, and they don’t really care what goes on as long as they get paid. If a person is in an executive-type role, then their goal is to increase revenue, so they convince themselves that it’s good for the company.


This seems very near the moral vacuity of the "just following orders" defense.


That's perhaps the greatest power of the corporation: it allows people to do shitty things without any specific person being at fault.

Executives have a "duty" to increase "shareholder value". It's not that they necessarily wanted to do X, but their hands were tied because the "data" clearly showed that X was best for shareholders. Plus, if X was so bad, it's really the government's fault for not making it explicitly illegal.

Shareholders aren't individuals either; they're mostly mutual funds, pension funds, ETFs, etc. that make algorithmic investment decisions. They didn't ask for X, but the funds they invested in will react to not getting X.


For the beta roles (because I can't help mapping wolf/pack behavior onto most corp meetings anymore), about all a person can do is mount a weak defense, which gets ignored by upper mgmt as they justify ASPD with a framework that says the number one priority is the corporate profit statement.

What percentage of people in these meetings are so wealthy they can risk everything over morally gray area decisions like this? Further how many can get away with it repeatedly should they choose to fight a battle like this?


Just so you know, the model of "alpha wolves" is considered simplistic and outdated in the study of actual wolves. Just one link: https://www.nationalgeographic.org/media/wolves-fact-and-fic....

I've found that when people use "wolf pack" (or "caveman times") explanations, what they're actually doing is using social models that (surprise!) reflect the culture that created them: humans in the twentieth century.


Few people in such a meeting are "risk[ing] everything".

I've quit jobs rather than doing sketchy things. For people in these industries, there's always a next job.


I don't think this is a question of someone doing "sketchy" things. It's a question of someone in the room questioning a morally questionable action being implemented by part of the organization as a whole. Quitting over it likely doesn't even have an effect; someone on the team required to implement it is going to follow the boss's orders. This appears to have happened a few times with members of the US president's cabinet over the past few years.

So, it's more a "stay and fight" or "get rolled over and threaten/quit" decision. I'm betting most people just weigh the monthly mortgage payment against that: they raise the issue, but it doesn't get pushed beyond the discussion phase. If this goes on long enough, they switch jobs, or they become that person who just keeps their head down and does what they are told.


If you're just gonna keep doing it, you're not "staying and fighting" at all.

You don't have to be the just-following-orders guy, is what I'm saying. Somebody else might--that doesn't have to be you, and shouldn't be.


Part of this is a focus on short-term initiatives that are easy to measure and repeat. Boiling down billions of software decisions to a few KPIs seems short-sighted IMO but hey it makes money.


> I cannot believe that everyone is ethicality (sic) challenged...

Why not? Ockham's Razor says to accept the most parsimonious explanation, and I think that's it.

I mean look at little kids: they're amoral monsters. If they weren't so cute our species would have gone extinct ages ago.

Look at our methods to train ourselves to be better people: religions cause wars while "The Wolf of Wall Street" is a big hit. ($392 million worldwide.)

Look at our leaders.


The Wolf of Wall Street was a scathing critique of capitalist excess. To think otherwise is to find glamorous a lifestyle where your wife hates you and you crash your car on quaaludes because you've got nothing better going on.


I didn't see it. All I know about it comes from Christina McDowell's open letter:

https://www.laweekly.com/an-open-letter-to-the-makers-of-the...

> Your film is a reckless attempt at continuing to pretend that these sorts of schemes are entertaining, even as the country is reeling from yet another round of Wall Street scandals. We want to get lost in what? These phony financiers' fun sexcapades and coke binges? Come on, we know the truth. This kind of behavior brought America to its knees.

My point is that we did find it entertaining to the tune of $0.4B, and that doesn't bode well for our general level of moral development.


FWIW I just found this fascinating tangent: https://melmagazine.com/en-us/story/the-perfect-irony-that-t...

> THE PERFECT IRONY THAT ‘THE WOLF OF WALL STREET’ FILM WAS ALSO A REAL-LIFE SCAM

(caps in original)

> How Leo got caught up in a money-laundering scheme that screwed the Malaysian people out of billions.


You don't apply, don't get hired, or don't get promoted, depending on how effective their hiring processes are.


That leads me to another question: are there people/companies who will not hire someone who has been an employee of FB?


That seems generally unlikely to me, but it’s a big ol’ world out there, so I am sure it has occurred someplace.


I have heard someone spouting that but it was all baloney since that person supported actions just as bad.


Almost everyone is ethically challenged, we just need the right circumstances for particular expressions to emerge. The people who do right and wrong by you might be alternative persons under alternative scenarios.

The very poor and very rich are often placed in front of ethically interesting bargains, such as a trade of life for money, whereas Hacker News has trouble even daring to ballpark the dollar value of a life -- a middle-class aesthetic, where one has neither the resources nor the desperation to trade in flesh.


People tend to rationalize it as not that ethically challenging, or compensate by pointing to some other societal benefit.

I knew someone who ran a FB group that devolved into conspiracy theories and absurd levels of anger to the point that members of the group were lashing out at local politicians.

The group owner liked the power and influence, so they rationalized it as "increasing public engagement in politics." This person is otherwise a vegetarian who fosters animals and works in the medical field.


> I cannot believe that everyone is ethicality challenged

The difficult ethical discussion probably never happens. The decisions being made in those meetings are usually seen as small/inconsequential. The problems caused by those "small" decisions are ignored. Eventually those problems become normalized, allowing another "small" decision to be made. Humans seem to be very bad at recognizing how a set of "small" decisions eventually adds up to major - sometimes shocking[1] - consequences that nobody would have approved if asked directly. Most of the time, nobody realizes just how deviant their situation has become.

For a good explanation of the mechanism underlying the normalization of deviance (as an abstract model), I strongly recommend this[2] short talk by Richard Cook.

[1] https://blog.aopa.org/aopa/2015/12/07/the-normalization-of-d...

[2] https://www.youtube.com/watch?v=PGLYEDpNu60 ("Resilience In Complex Adaptive Systems")


I've been in that situation. I argued as much as I felt I could get away with and made the strongest arguments I could against unethical behavior. I was eventually forced out. A couple years later, the company was investigated by law enforcement and subsequently declared bankruptcy.

The people in control were the only ones pushing for the unethical actions, but most others were a lot more quiet than I was and several stuck around until the bitter end.


People are capable of all sorts of mental gymnastics to keep things at arm's length. Bad practice X is because of group Y or requirement Z.


The discussions in this article are never shared with employees; it is a matter raised only in closed, high-level board meetings. Companies never discuss negative positions openly, and if they do, it is only to dismiss them.


> I cannot believe that everyone is ethicality challenged, only perhaps the people in control.

Seems likely that social media as an industry selects more strongly for unethical executives, presumably because online advertising is the only effective way to monetize social media and it is more or less fundamentally unethical. I imagine the same effect can be observed among tobacco and fossil energy executives--these are industries where there is no ethical monetization strategy, at least not one that is in the same competitive ballpark as the unethical strategy.


Online advertising as a concept is fundamentally unethical? I think you're speaking in hyperbole here. Stealing user data without consent (or with fake "here, read this 500-page legalese" consent) is unethical for certain.

But a bike blog putting ads for bike saddles on the bottom of their page to pay for their server costs and writing staff? Hard to see how that's unethical unless you think selling anything is unethical.


> Online advertising as a concept is fundamentally unethical?

No, I meant "online advertising as an industry". It's unethical to the extent that it depends on stealing user data, which presumably is the overwhelming majority of the industry by value (i.e., I'm assuming your privacy-respecting bike saddles ads don't account for even 1% of the industry's value).


>Seems likely that social media as an industry selects more strongly for unethical executives

More so than the fossil fuel industry? Big tobacco? Or the pharmaceutical industry? Wallstreet? Clothing/apparel manufacturers?


> More so than the fossil fuel industry? Big tobacco?

I already addressed this in my second sentence:

> I imagine the same effect can be observed among tobacco and fossil energy executives

No, that wasn't meant to be an exhaustive list of unethical industries.


I have an example. We built a feature that would be good for users. However, we found out that it would result in lost revenue. The decision of whether to keep the feature got bounced up the management chain. Eventually we were told to can the feature, and that the decision was made at the very top. Keeping it would have affected quarterly revenues. So, no go.

That showed me what kind of company it was. The decision went directly against one of the company’s supposed core values. This was not a small company. Don’t work there anymore.


Nobody thinks they are complicit but in reality we all are. Some can accept this while others let the cognitive dissonance drive their behavior in convoluted and hard to discern ways. Redemption only comes after accepting that we’re born of original sin. Anybody who supports or uses non-free software has worked to finance the amoral tech decision making that you’re decrying. Even Stallman makes compromises. Welcome to modernity.


I think it's mostly denialism (which is cultivated by management). This is a great article about it: https://newrepublic.com/article/155212/worked-capital-one-fi...


Not at boardroom level, but I was in a couple meetings in past jobs where this happened.

In one case, people had different ideas of what was more ethical/user-friendly. Since we couldn't resolve those disagreements with more arguing, we went with metrics, and metrics have no morality.

In another case, everyone agreed that it was slightly shady, but it was a highly competitive market and we had to do it to stay alive.

On the bright side, if a company ventures too deep into bad practices, it will eventually lose the trust of the public. Which is why the capitalist world hasn't descended into the complete madness portrayed in dystopian sci-fi films.


> how such discussions go

In my case, I told my manager about a system design problem that would cause a daily annoyance to 100k people, forcing them to input their passwords more often than necessary. He said, "they'll accept it." I said, "I quit."


Is it clear that echo chambers and polarized discussion are good for the bottom line? I imagine they help with user growth and user retention, but would people engaged in these polarized echo chambers actually spend more on advertised products?


I think what happened here is a little different than how you describe it. To me, it seems they had a hypothesis, found support for it, then changed its definition for speculative motivations, with tangible harm.


What kind of harm do you propose is the kind that should have pushback?

Do movie executives even discuss the ramifications of their movies which glorify ills? Do they censor violence, suicide, etc.?


> What kind of harm do you propose is the kind that should have pushback?

"Some 700,000 members of the Rohingya community had recently fled the country amid a military crackdown and ethnic violence. In March, a United Nations investigator said Facebook was used to incite violence and hatred against the Muslim minority group. The platform, she said, had “turned into a beast.”" https://www.reuters.com/investigates/special-report/myanmar-...


So why Facebook, but not movies and TV, over the air or streamed via other platforms? What, because it comes from studios and other sanctioned organs? Are they above propaganda and above having agendas?

I’m not saying FB is not culpable, but I’m saying if they are, then so are others.


> above having agendas?

Having an agenda is normal and good. Everybody who plans for the future has an agenda. What is wrong is to have a "hidden agenda".

A "hidden agenda" is wrong because it is a form of manipulation. When an organization has a "hidden agenda", it means they are lying to achieve a goal that they are hiding.

If a movie's agenda is to "create awareness of human trafficking", and it shows how human trafficking impacts people's lives, that is not "hidden", and it is actually an agenda that most people support.

So, to have an agenda is intelligent, needed, common, awesome behavior. Stones have no agenda, rocks have no agenda. To have a "hidden agenda" is what should be criticized.

Why would anyone think that to have an agenda is bad?


So, I work in healthcare - as a doc, and at various times, as an admin in healthcare centers as well as in health insurance. I don't know how much of that experience relates to FB's behavior, but I have some idea of what it's like to work in a field and be either called a hero or a devil, depending on the day. I am neither.

Deep breath.

As an industry, we are often doing things that are perceived to be evil. I've noticed the following:

1. Some of that interpretation is just wrong. People from the outside tend to have a poor understanding of what we do (providers, centers, insurers) and draw conclusions based on highly imperfect information. This is compounded by the fact that journalists have a terrible comprehension of what we do and an incentive to dramatize and oversimplify it - resulting in people reading the news and walking away misinformed and wrongly feeling like they're now educated on the topic. This happens a lot.

2. We sometimes do things, or want to do things, that have potential harms and potential benefits - e.g., in health insurance, I'd love to have had the ability to twist people's arms into coming to get a flu shot. It would have been a huge net benefit to their health. It would have been a net reduction in our costs. It would have been great! If we'd had the ability to ignore patient autonomy and force it, or carrot-and-stick it, we probably would have. We would not have conceptualized it as "ignoring patient preference," we would have conceptualized it as "preventing a bunch of preventable hospitalizations and deaths and, for the elderly, permanent consequences of hospitalizations." And that would have been true! And would have allowed us to not think about the trade-off so much. It's not lying to yourself: it's looking at the grey, round-edged parts of a cost-benefit analysis and subjectively leaning it in your direction. My motivation there isn't even about the money - the money just gets it on the radar as something my employer would be willing to prioritize.

3. Resource scarcity. I only have so many resources to allocate. One may benefit a patient X; another may benefit them 10X. If X benefits my organization and the 10x choice doesn't, I'll probably choose X. By itself I'm not choosing to do harm - I'm choosing a win/win. Enough decisions like that, in enough contexts, probably do give rise to net harm. But the choice isn't to do harm.

4. Not every battle can be a "will I burn my career over this?" battle. If I'd ever been faced with a choice that I thought was harm > benefit to patients, I would have burnt the house down over it. But I haven't. I've been faced with lots of little grey questions with uncertain costs and uncertain benefits where there was, in fact, benefit, and usually not just to us but to the patients too. I imagine that's where most organizations go awry: a thousand decisions like this, shaking out under the pervasive organizational need for profit. Like a million million particles of sand moved by the tide, settling out into an overall pattern due to gravity. I think the badness is generally an emergent pattern, not a single person choosing to do evil, or choosing themselves over causing harm to many. I've never been in that position, ever, so either my career is highly anomalous, or that's just not how those choices present themselves in real life. I suspect it's the latter. (Or, I guess, my being amoral is a valid third possibility.)


Capitalist systems sieve out people whose goals are at odds with the accumulation of capital. By the time you get to a boardroom, everyone has been tested hundreds of times for their loyalty to profit. All deviations are unstable: over a long enough period of time they will be replaced or outcompeted.


Not sure why this comment is being downvoted. The people who rise through the ranks are exactly the kind unburdened by ethical or moral issues that get in the way of the business generating revenue. In fact, such folks use their short term gains from breaking such implicit expectations to jettison themselves ahead of their peers. As such, this kind of behavior is incentivized.

Those with such issues either quit or work in non-controversial parts of the org.


Agreed. And it works between companies as well as between people within companies. The system is set up so that only those who push the boundaries and exploit externalities can compete.


When you describe capitalist processes, some people take it as if you are making a morally charged argument.


I think there is a crowd that kneejerk downvotes ideas they interpret as anti-capitalist, without reading the argument.

An example: I am not a Marxist. But I think the Marxist question of "surplus value" as an ethical question is relevant and interesting. I pointed it out on HN a few times. Again, without being a Marxist, just intellectually curious. Nobody ever asks me if I am really a Marxist. I get downvoted pretty severely when I point it out. I get an impression that they smell a whiff of the opposing sports team and turn negative.


Ugh, such strong language. Please censor it as M*rx or, better yet, "literally Satan".


I don’t ask people if hey are really Marxists because when I have in the past I get accused of “pigeonholing” their idea(s)


*they


There are very many lucky people who are now fierce libertarians on HN these days.


Yep, CEOs are sociopaths at about 4-10x the normal rate [1] for this reason

[1] https://www.forbes.com/sites/jackmccullough/2019/12/09/the-p...


By an arbitrary definition of sociopath invented by a researcher that has little to do with the commonly-accepted definition of sociopath, who used his broadened definition to build his career on the pillar of running around making surprising declarations about "sociopaths."

I'm really, really tired of hearing about the "sociopath CEO" numbers. They're not real.


I've been there at Google a few times and can imagine exactly how this went :/. The one time I can tell about is the Blogger disaster [1]. The top leadership, spearheaded by the chief of legal, was basically ignoring everyone's logical arguments at the meetings, the town halls, etc. We kept coming to the mic and telling them that their ideas of what is and isn't sexual are arbitrary, as are anyone else's. They said "no, we have experts and we have a clear definition" (they didn't). We explained that post-facto removing content people wrote is cruel and unnecessary. They claimed "nobody would care or miss it" (of course they would). We told them that this would hurt transgender people, who used to find support in the blogs of others going through the same life challenges and blogging about it. Those blogs would be banned under the policy. They said they had data that the impact would be minimal. (They had no data.) Normal rank-and-file people at Google all knew the idea was a bad one. We fought hard. They scheduled an 8am town hall and announced it the day before at 9pm! We showed up anyway, en masse! There was a line to the microphone!

They had microphones in the audience. I walked up and directly asked for the "data" they claimed to have showing no impact would be had. They claimed, and I quote, "we have no hardcore data" (the audience laughed at the word choice, given the topic). I said, "well, then how can you claim to be making a data-driven decision?" Drummond answered that "we know this is right and we are sure." The town hall was a waste of time. Nothing we said was heard, and all they did was recite lines at us from the stage that made it look like either they did not understand what we had to say, or they were trying very hard to appear not to understand. Both sides were talking, but nothing we said seemed to change their minds. They came there to deliver a policy, not to collect feedback on it, despite claiming this was a meeting to discuss it. That was clear.

We did not give up. Google's TGIF was the next day. A number of people came early and lined up at the microphones, ready to bring this up again and again, in front of the whole company and the CEO as well (Larry and Sergey were not at the town hall and claimed not to have heard of the policy until "the ruckus started").

I guess they saw the long line of people and relented. Before the scheduled TGIF began, they announced they would reverse the policy.

This was a rare victory for this sort of situation. I am willing to bet that there are lots of good people at Facebook who also fought as hard or harder against this. They just probably lost. Having seen how this plays out internally, I am not surprised, just sad.

To anyone at FB who fought against this, I send you my thanks!

[1] https://techcrunch.com/2015/02/23/google-bans-sexually-expli...


They quit. The process selects for the most sociopathic because the fitness function is heavily weighted toward bringing in profits in the short term. Ethics are only a consideration to the extent that they affect public perception (hence profits) or safeguard against litigation (protecting profits).


The Banality of Evil.


Where do you draw the line?

If the customers are willing to pay a huge markup on a product, who are you to tell them wiser?

- youdontchargeenough11 (probably)


This is more than just a price markup.

This is more like a pharma company or Monsanto knowing that their product kills, but ignoring or hiding the data and continuing to sell the product.


Division on FB started out as squabbles between friends and relatives.

And yet, here we are.


I'm ethicality challenged if I think the biggest (or at least up there) forum of public discourse shouldn't be micromanaged like a day care, with "divisive" people sent to time out? Is it unthinkable to you that some people value free expression over being protected from negativity?


I've typically found my employment via companies who deal with a variety of contracts, some of them for weapons or defense contractors.

I could go down the rabbit hole of chasing down all those contracts and would probably find that many of the products my company makes get sold to groups and causes that I don't support. But in the end, I've gotta eat.

Do I want to throw away my career, which is 99% unrelated to the SJW cause I support, just because 5% of our products eventually get used against that cause? What about the 95% of our products which go to worthy causes?

I'll say it again... I just gotta eat, man. What's good for the gander is probably good for the goose too.


Are all your employment options equally in the moral grey area? Or did you just not want to think about it?

Look, do what you want, it's your life. I spent a decade working in defense and now I don't. Some times were uncomfortable. I hope you keep your eyes open when making decisions to avoid some of the discomfort I've felt in the work I've done.


If you're working for defense or weapons contracts you're supporting the industry that has kept us in the Middle East for almost two decades.

I agree that it's often a moral grey zone, but in this case it's pretty clear. If you're an engineer, there are plenty of other companies to choose from.


While that is true, I've worked in manufacturing environments with high-tech equipment. This manufacturing equipment is so sensitive it gets covered with a tarp during dog-and-pony shows. We are using equipment and techniques in the USA that other nations could only dream of implementing. Why do you think most airplane manufacturers are located in the USA? Don't you think an airline would buy aircraft engines from China if it could?

Keeping America at the forefront of technology has its benefits. If we don't invest in cornering these technologies, our adversaries will.

Unfortunately, it's the same technology that has kept us in the Middle East that's also been a forceful deterrent which safeguards all Americans.


Products that may be sold to terrorists include canned beans and Toyota trucks. Your situation might actually be less morally compromising than the Facebook stuff being discussed, because in their case they are the "questionably motivated 'freedom fighters,'" (i.e. they're directly doing the morally questionable stuff) whereas you're just selling stuff to a broad market that may include questionably motivated "freedom fighters." It's sort of the difference between selling lockpicks that may eventually be used in a burglary or might also be used to get Grandma's safe open, versus breaking in yourself.


Forgive me for the bluntness, but nobody with any set of technical skills "gotta eat" by supporting those kinds of efforts. I've worked to practice what I preach, too; I've consistently worked in do-no-harm jobs. I make rowing machines today, and the worst you can hang on me from the past is that I had a daily-fantasy-sports site for a client for a while (which I'm not proud of, but it's a pretty venial sin)--and I have made more than enough money to do very well for myself.


Makes me think of a quote from an old West German anti-napalm film "Nicht Löschbares Feuer" (The Inextinguishable Fire) [1].

"The students of the Harvard University write that I should leave the criminal Dow Chemical Company.

I'm a chemist. What should I do?

If I develop a substance, someone can come and make something out of it. It could be good for humanity, or it could be bad for humanity.

Besides napalm, Dow Chemical manufactures 800 other products.

The insecticides that we manufacture help mankind.

The herbicides that we manufacture scorch the harvest and cause him harm."

[1] https://vimeo.com/107990231


Are you actively looking for employment elsewhere so that you can transition away from supporting harmful causes? Or are you using the excuse that you have to eat as a reason not to do hard things in your life?

I have used that excuse myself. I'm trying to get better at not using it.


As actors in the world, we are machines that turn sensor data into a linear stream of actions. To the extent the decision process is not completely random, there exists a metric that ends up maximized by the decision process, sometimes referred to as 'god' or even 'God'. The vast majority of economic decision processes in the modern economy are driven by one metric: money, sometimes referred to as 'Mammon'. A corporation is an aggregation of human/computerized actors that work to maximize the corporation's metric: money earned by said corporation.

The discussions are very simple: course of action A makes us $X, course of action B makes us $XXX. Therefore course of action B is taken. There is no consideration of other effects besides, perhaps, a quantification of risks: the risk of losing the 'good guys' facade, counterbalanced by PR expenses, or the risk of being sued, counterbalanced by legal expenses.



