Hacker News

I'm having a hard time reconciling all this right now. On the one hand, from the outside, I can see the actions that Facebook takes and they seem awfully guilty of what they are accused of. But on the other hand, I personally know and have previously worked with some of the people who work on trust and safety, specifically for kids. Good people who have kids of their own and who care about protecting people, especially children.

The best I can come up with is that Facebook is so big that the "evil" is an emergent property of all the different things that are happening. It's so big no one can comprehend the big picture of it all, so while the individuals involved have good intentions with what they are working on, the sum total of all employees' intentions ends up broken.

So maybe Zuck is telling the truth here, that they are trying to fix all this. But no one can see the forest for the trees.

I can't reconcile it any other way.




The person who mentions the banality of evil, dannykwells, has an excellent point.

But there's more at play here. I briefly worked on Twitter's anti-abuse engineering. Many of the people on that team cared a lot about protecting people. I sure did. But we didn't have the necessary power to actually solve the problem.

The people who did have that power were senior execs. They might say that they cared. In their heart of hearts, perhaps they even did. But their behavior demonstrated that they cared about other things much more.

My boss's boss, for example, was an engineering leader who had a climber's resume: quickly advancing through positions of more and more power. In my view, he cared about that a great deal, and did not give a shit about the actual harm to users. As soon as he got the chance, he pushed out my boss, laid off the team's managers, me included, and scattered the people to the wind.

I presume the same was true about the senior execs. They were aware Twitter was causing harm to people. If they wanted to know the details, we had plenty of research and they could have ordered more. Did they care? Impossible to know. But what they focused on was growth and revenue. Abuse was a big deal internally only as long as it was a big deal in the press.


I think this hits the nail on the head. It's not that Facebook or the many people who work there don't care about kids or a deleterious political climate. They do care. It's just about what happens when those concerns conflict with other concerns, such as maximizing user engagement. In my opinion Haugen's testimony and Zuckerberg's response simply confirm this: Haugen talks a lot about the research that was done and how that research was ignored; Zuckerberg points out a lot of (somewhat lacking in context) facts about the size of Facebook's investments in trust and integrity or openness to regulation.


Fair to say it's about incentives? I think it's possible to run a profitable social media company that does care. We're just stuck with too few options. I'm unaware of any that make transparency a core value by, say, openly publishing moderator actions.


> I think it's possible to run a profitable social media company that does care.

That's probably true, but not at Facebook's scale. Because everyone's bonuses and stock value are based on profit, for employees to accept the argument "We could have increased profit but chose not to in the interests of users," you'd need to hire a lot of people who are happy to trade some amount of personal gain for user wellbeing. That would be difficult when you have tens of thousands of people.

Alternatively, maybe it'd be possible to run Facebook without giving the staff equity or bonuses. That would take a huge shift in the way tech remuneration works, though.


This comment treats employees (people) as if they were algorithms maximizing personal revenue.

That is not necessarily true. If FB were a place where people could feel proud of the good it does, a lot of people would be OK with taking a pay cut.

Sure, maybe the ones that care the most about compensation wouldn't join, but that might be ok.

I previously worked for a for-profit company that donated a massive share of its profits to charity. It was something I was proud of, and part of my evaluation of how much I liked working there.


It seems increasingly likely that if FB keeps ignoring these concerns they're going to get hit with some sort of government regulation. So I still think it's in their long-term self-interest to try and avoid that happening, even if we assume profit is really all they care about.


If they can do this, and get everyone to trust that they do, their company will be beyond any organization in human history. You could let them take care of your kids!


>I think it's possible to run a profitable social media company that does care.

No. Due to network effects, a successful company must be large, and a large company lives by different rules. FB, for example, is a trillion-dollar company. It is like the TeV energy level in physics: matter behaves differently there. At those energy levels chemistry simply doesn't exist, and matter itself changes as protons break apart. Ethics in big business is like chemistry in high-energy physics: it just doesn't exist at those levels of money. At these levels they can be affected only by a comparable level of money or power, like the power of government.


Then they are too big and must be broken up.


I can't see it working. Instagram and WhatsApp should be spun off, absolutely no question about it. But Instagram and Facebook still remain way too big.

And how do you break them up? By geographic area? People would be pissed because they have contacts with, or maybe want to watch vacation videos from, people in different areas.

IMHO the only options would be:

* non-profit run by the UN or similar

* forced to open all data and APIs to make it a platform, with rules to make it an even playing field

In both cases with much less, and why not zero, "engagement" focus (just show the latest stuff chronologically from the things you've liked/subscribed to). Could either work? No idea, but it seems to me they have a better chance than a Facebook per country or region.


It's probably too late for any specific publicly traded company that already exists, but social media as a protocol, with forced interoperability if necessary, is the way to solve this. Having many regional phone carriers doesn't prevent people from communicating across regions.

Of course, it does cost more, so the question becomes who is willing and able to pay for a socially less harmful means of networking individuals without having to put them all on a single data hoovering ad platform? Whether it's direct charge to consumers or subsidized by government, the money still ultimately has to come from people.




> social media as a protocol, with forced interoperability if necessary, is the way to solve this

Hey, if they get that implemented for Facebook, maybe they can try to make it work with health care records too. Good luck with that...


here's a business model for it:

Implement ActivityPub (an existing protocol recommended by the W3C), and offer their underlying social networking services as hosted and managed software for big orgs to operate on their own domain. With interoperability, they'll work across domains. Target customer is anyone with at least 100K followers (so gov, institutions, media, et cetera).
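For concreteness, here's a sketch of the minimal shape of an ActivityPub actor document that each hosted org would serve on its own domain. The domain `news.example` and the username are hypothetical; the field names and the `@context` URL come from the W3C ActivityPub/ActivityStreams specs:

```python
import json

# Hedged sketch: a minimal ActivityPub actor document for a hosted org.
# "news.example" and "newsroom" are made-up placeholders; the @context URL
# and the type/inbox/outbox fields are defined by the W3C specs.
actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Organization",
    "id": "https://news.example/actor",
    "preferredUsername": "newsroom",
    "inbox": "https://news.example/inbox",    # other servers POST activities here
    "outbox": "https://news.example/outbox",  # this org's posts are listed here
}

print(json.dumps(actor, indent=2))
```

Because every compliant server understands this shape, followers on any other ActivityPub domain (a Mastodon instance, say) can subscribe across domains, which is what makes the hosted-service business model interoperable.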


If the United Nations ran Instagram, they would ban any discussion of human rights in Hong Kong, because China is a veto member.


And the US would ban any discussion on Guantanamo and CIA torture. What's your point?

It should be run independently, like the WHO, not directly under the power of the security council. And before you say Taiwan, the Republic of China is not a UN member, and there's nothing the UN or WHO can do about it, it's between China and them.


I am unfamiliar with cases where the US prevented CNN (or others) from discussing Guantanamo.

I am familiar with the extreme lengths China would go through to prevent discussion on topics it hates though (e.g. if HN was based in China both of our comments would have been deleted immediately)


> And how do you break them up? By geographic area?

Have you heard of federation and interoperability between different networks? Have a look at how Mastodon works.


And who is going to break up TikTok or whatever comes next/instead?

Social media is like tobacco, and FB increasing engagement at all costs is like Big Tobacco increasing the addictiveness of cigarettes. As with tobacco, the way to deal with the issue is to wean the population off, in particular by educating people about the damage it does to them.

Btw, "Statement from Mark Zuckerberg" reminded me of that foundational tenet of Facebook:

Zuck: yea so if you ever need info about anyone at harvard

Zuck: just ask

Zuck: i have over 4000 emails, pictures, addresses, sns

Friend: what!? how’d you manage that one?

Zuck: people just submitted it

Zuck: i don’t know why

Zuck: they “trust me”

Zuck: dumb fucks


To be fair to the guy, he was 19 then. Who amongst us hasn't said something (many things, in my case at least) stupid at 19?


Also, he was right. People shouldn't have sent Zuck their private info even if Zuck were a saint, because there was no way they could have known Zuckerberg was a saint.


How many stupid things you said at 19 made you $100B? I'm pretty sure that if a stupid thing you said at 19 had made you $100B by the age of 30, and continued to make many billions after that, you'd pretty much continue to believe and follow that stupid thing.


This wasn't about Facebook, it was about a different, non-commercial project (as far as I know, anyway).


That is a beautiful simile, thank you!


I honestly don't know if it's possible. Common sense seems to suggest that social networks become more harmful as they become bigger and more profitable. That's just my surface-level take, though. I guess the meta-question is what to do about highly profitable lines of business that cause demonstrable social harm. Clearly regulation and government intervention are one answer, as are reforms to corporate governance structures (e.g. German and Nordic rules around worker representation on corporate boards).


> Common sense seems to suggest that social networks become more harmful as they become bigger and more profitable.

But FB isn’t a social network. It’s an engagement farm and advertising platform that abuses a social network to drive its business aims.


While I'm sympathetic to your anti-FB bias, I think it's pretty clear that by any reasonable definition Facebook is indeed a social network (which also happens to make gobs of money through advertising, and covets engagement because that increases ad revenue as well as a general sense of "platform health"). The question posed was whether "being bad for society" is an emergent property of social networks as they grow.


I guess you’re still on FB?

It's quite hard to see how it's just a business that doesn't give one shit about you, no matter how many followers you have or how many likes you got, when you're still using it.

An actual social network would care how you feel today.


A social network is just a kind of business, like a chain of supermarkets or a steel mill. You could, of course, have a non-profit social network (or a business with a different corporate structure - as I allude to in a parent post), but that's not what Facebook is or what its peers are. In the United States, businesses are beholden first and foremost to their shareholders, which means that there's an inherent tension when "giving a shit about you" and "making money" come into conflict. What makes you think that a social network with a normal corporate structure and in the absence of countervailing regulation would have more empathy than any other business?

As an aside, I'd like to make the broader point that boiling everything down to "Facebook=evil" doesn't seem like a productive way to get the changes that I think both of us would like to see. It walks right into the strawman arguments that Zuckerberg is responding to in his post ("Why would we invest so much in trust and research if we didn't care?"). And it doesn't capture the fact that Facebook is a huge entity composed of a lot of people, many of whom have different incentives and some of whom are even trying to do the right thing (see: Frances Haugen).


You’re trying to appeal to “my better nature”, to help me believe that FB can be changed. It wont work.

In 2007 or 2008 I promoted the idea amongst my friends to “poison the well”, to feed bad data into FB’s algorithms. If they enjoy Coke, talk about Pepsi, I said. If they vote Green, share far right news. I called it Falsebook.

It didn't work, because the problems inherent in the advertising auctions are not very visible to users. They mesh with the echo chambers of our friend circles and fizzle into the background. We seek out echo chambers in order to feel safe and validated. It's mostly fine when it's just humans relaxing with friends. But when that echo is robotically generated from an accurate model of all the individuals involved, it can be leveraged for all sorts of big-scale shady crap. Advertising is the village idiot of this town; corporate-backed political propaganda is the warring gang lord.

And Zuck does what any market owner does: sits back and rakes in profit, cleaning up the mess when it suits him. And mostly it doesn't.

I’ve never felt it could be fixed from the inside. I’ve never thought it was a good idea to begin with but I let peer pressure and my magpie nature suck me in. I regret signing up for FB and GMail back in the day because corporate surveillance has fucked our society hard and let sociopaths run rampant.

You will not change my mind. I wasn’t contributing to the conversation in good faith. I shall assume you were. My original glib comment about FB abusing social networks wasn’t an invitation to learn a new perspective, it was a war cry, a flag raise, a call for comrades. I was hoping someone would respond with links to a new Scuttlebutt implementation or tell me about a cool Masto instance. Or try to explain why I should bother with Matrix. Or something more interesting than all of those.

The web is sick. Personalised advertising has made it ill. We need to fix it and I don’t believe FB or GOOG are interested in trying.


An actual social network would let you choose how you feel, which is why you can choose your friends who might all be downers and feel horrible, and the social network will let them share their despair with you.


> Common sense seems to suggest that social networks become more harmful as they become bigger and more profitable.

Yeah I don't know that anyone could have predicted these networks would become too massive to displace. Intervention should be on the table, no doubt about it. I hope they bring up net neutrality and Facebook's violation of it via internet.org. I thought "Free Basics" had died but apparently it's still going.

https://en.wikipedia.org/wiki/Internet.org


> It's just about what happens when those concerns conflict with other concerns, such as maximizing user engagement.

They care about their own kids more than they care about the abstract, faceless millions of other kids on FB. Moral bankruptcy starts when they harm other kids in order to put food on the table for their own children. I would not be surprised if some of FB's employees ban their kids from using it.


I don't necessarily subscribe to the Gervais Principle[1] other than thinking it's an interesting lens through which to reexamine motives and motivations of coworkers, but sometimes the terminology is damn apt (at least for one group...).

1: https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-...


Reminds me a bit of one of my favorite books to chuckle over: systemantics


Bingo. I never interacted with a person on the FB integrity teams who didn't care deeply about these problems - but their solutions never seemed to make it into production. Whether that was because of the unintentional friction of bureaucracy, or the explicit wishes of execs, is somewhat immaterial in the final analysis.


Were you ever really meant to solve the problem, or just to be a Potemkin village in case anyone accused the execs of not caring?


Meant to by whom? I was very serious about it. So was my boss and most of the people on my team. Beyond that, I don't have direct data.

My guess is that execs would have been very happy if we could have quickly solved the problem in a way where revenue and growth were not harmed and nobody important had to go out of their way.

But online abuse isn't like that. It's a hard problem. So I think execs were satisfied to say they were making a big effort, celebrate some modest gains, and then stop thinking about the problem once it wasn't a giant PR/regulatory issue for them.

So it's more like how a lot of people mean to get fit or lose weight. If it's New Year's Day or their doctor scares them enough, they'll get real serious for a while. They probably do mean it, but they mean a lot of other things too, and those win out.


I think you're exactly describing a "Potemkin village in case anyone accuses the execs of not caring." The people in power weren't serious about solving the problem (because the only things they'd accept were rearranging the deck chairs on the Titanic or an easy magic solution that could never actually exist), and the main benefit your team provided was PR/regulatory cover for the organization.


I would posit that no company "actually" cares. The premise of Twitter isn't "the social platform that has no abuse"; if that were the main goal, I'd imagine they'd have been another nothing startup that ran out of money and died years ago. And if only companies that don't have social good as their primary goal ever exist in the first place, then judging them for that doesn't seem particularly useful.


> So then if only companies that don't have social good as their primary goals are the ones that would ever exist in the first place

Or alternatively, we could try to view this as the root problem and try to fix it.

Edit: Note there is also a difference between "not having social good as their primary goal" and working effectively against the social good, whether intentional or not.

Edit2: Explanation of the downvote would be nice.


No, a Potemkin village is never meant to be real. The parent commenter suggested it was like New Year's resolutions, which are meant to be real, but in the end people fail because they like their New York cheesecake too much to change.

Then the doctor tells them again, "You need to go on a diet or you will have a heart attack," and they go on the diet for a couple of months.

Having a heart attack may solve the issue.

Of course, there are clear-eyed people who see the situation for what it is; those who stay on don't care and consider the efforts to fight abuse a Potemkin village.


Exactly. To me a Potemkin village is one or two steps further away from reality. The Potemkin village is unoccupied and has no potential for occupation. All involved in its construction know it's fake.

My team was sincere, worked hard, and definitely got some good stuff done. Just not nearly as much as we wanted.


In this metaphor you're the buildings, not the builders.


Same thing then. In the metaphor, the buildings' only purpose and only outcome is to fool. One of our purposes was to get things done. And we definitely had some impact. If the public/regulatory pressure stayed constant, we would have gotten some more done. We would also have gotten more done if executives had taken it more seriously, of course.


So here's a much more general statement: your identity and your sense of self, your consciousness, your "choices" that build your life's narrative, are all actually a "Potemkin village in case anyone accuses the execs of not caring". The "anyone" in this scenario being other social agents you interact with (see [1] for further thoughts on this).

> the main benefit your team provided was PR/regulatory cover for the organization

This sort of reasoning seems to be applicable on many levels of social organization, from brains to countries. Most of the stuff your brain does is for show / self-delusion, most of the stuff any community or global organization does is also for show / self-delusion. It's "Potemkin villages" all the way down.

[1] https://www.elephantinthebrain.com/




This is a place where a founder-CEO-demigod like Zuck should be able to make better decisions than a professional management team like Twitter's. The long-term profit-maximization strategy was to maximize profits only up to the point where you risk getting regulated by government. With all the fawning praise of him as a kid, I don't think Zuck envisioned that one day both Democrats and Republicans would be united in their desire to fuck him over.


Zuck is desperate to be regulated: he asks for Congress to step in every time he’s asked.

Regulation would be awesome for Facebook: not only would it be a fig leaf for all their social problems (“hey, it’s not our problem anymore”), it would also stifle any potential competitors out of their market. The costs of regulation are regressive: much more easily absorbed by BigCos than any startup.


Could be. Some of the proposed regulation I've seen for this specifically exempts companies under, say, $100m/year in annual revenues. So legislators aren't unaware of the problem.

Another possibility is that Facebook knows that asking Congress to do something is either a) not going to increase the odds of them doing anything, b) actually decrease the odds by sounding contrite, or c) puts it in a place where Facebook's army of lobbyists and otherwise connected individuals make sure nothing meaningful will get passed into law.

It's not a bad bet given how polarized Congress and the American electorate are. And gosh, who is a big enabler of that polarization?


The base problem isn't really solvable, and it's as much a public discussion about what we want to do with speech first as it is a question of how we want social media firms to act.

In the end, there is no algorithm which can match the scale of bad content, no robust definition of bad content which can work without creating a flood of false positives.

Every false positive is now someone who had something valid to say who is silenced.

How are we going to decide which grey-area speech is unwelcome (leaving out obvious things that are illegal)?

The popular idea is increased human-centric moderation, but that's still going to be 2k email escalations per day for one region, at a 10% escalation ratio from a base of 20k reports.
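A quick sketch of that arithmetic, with a hypothetical per-reviewer throughput added on top (the 20k base and 10% ratio are the numbers above; the per-reviewer figure is made up):

```python
# Back-of-envelope moderation load for one region.
# 20k reports/day and a 10% escalation ratio are from the comment above;
# the per-reviewer throughput is a made-up assumption.
reports_per_day = 20_000
escalation_ratio = 0.10
escalations = int(reports_per_day * escalation_ratio)  # 2,000 escalations/day

cases_per_reviewer_per_day = 50                         # hypothetical throughput
reviewers_needed = -(-escalations // cases_per_reviewer_per_day)  # ceiling division

print(escalations)       # 2000
print(reviewers_needed)  # 40
```

Forty full-time reviewers per region per day, before accounting for shifts, languages, appeals, or reviewer burnout; the point is that the headcount scales linearly with the report volume.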


It only appears unsolvable because you've presumed that social media should exist in its current form. Yes, algorithms can't match the scale of the bad content before we hit AGI. But that problem only exists because we have for-profit companies hosting way more content than they can afford to police on the thin margins ads provide. (Twitter, for example, makes about $1 per user per month.)

Prior to the late 2000s, this problem didn't exist. In alternate universes, it surely doesn't; there are many ways this could have gone.


It's hard to say the problem isn't really solvable when nobody has really tried to begin with.


There are academic researchers and private teams all over the planet attempting to find solutions to disinformation, bullying and even spam.

Of course, it's pretty clear by now that there isn't going to be a magic bullet that is going to make everybody happy.


People are trying; what are you referring to?


Not OP. My impression is that a lot of the focus is around safety and specifically privacy. Both play right into social media giants' hands.

Instead we need to target Ads. Almost all problems can be eventually traced back to ads. At the very least traced back to the money incentive that ads create, on any platform.


Thanks, that's actually a great answer.

I would go one step further and suggest that ads are an issue for certain types of economic games or markets.

Any industry that depends on Ads tends to consolidate, and has an issue of incentives - the more people on the network, the more likely the network is able to survive.

On a tech forum people assume that challengers have better tech - but I would argue that challengers actually allow for more salacious/engaging content.

This is what creates the race to the bottom.

If the race to the bottom can be stopped - i.e. an incentive structure created that stops engagement being the primary metric, then the rest of the downstream problems are largely prevented.

That's my root-cause assessment of the situation. However, once I get to this point, any solution seems to be a mess of intersecting fields ranging from morality to legal constraints, press freedoms, free speech, etc.

So... I guess how do we set up incentives to not allow the most "engaging" content to dominate?


Or

d) He realizes that there are opposing factions with different ideas of what needs to happen, and it's impossible for his company to please them all, so pushing the decision to some semblance of a vote that claims to represent everyone is the only way to put an end to the endless arguing.

or

e) He believes what he wrote and doesn't think these social issues should be decided by corporations.

Either interpretation is fine and a lot more generous than yours.


He also wants the legislation to be toothless or misplaced so Facebook can pay lip service and not be fundamentally altered. Haugen just offered Congress more surgical solutions than they were drafting, as well as valid critiques of their drafts. That's why this particular post from Zuckerberg comes with such a large side of Kool-Aid.


That's an extremely pessimistic view of regulation. Somehow other industries manage to do fine despite it. In the EU, GDPR somehow hasn't snuffed out all small businesses.


I mean, the harmful effects of regulation are much less visible than the harmful effects of what is being regulated. We don't know to what degree the GDPR has snuffed out potential small businesses.


> In the EU, GDPR somehow hasn't snuffed out all small businesses.

Simply because there was enough regulation in place in that area that they didn't exist in the first place


"The long term profit maximization strategy was to maximize profits only up to the point where you risk getting regulated by government." That is correct, but as with Wall St., the addiction to the gamble has long been fed by an environment lax on regulatory options. So when the blowback actually hits, it will not just be painful; it will likely come on the tail end of a Ragnarok-grade event that shanks the whole industry for a generation, because current C-suites are so far up their own asses they can take direct inventory of the last few days of meal plans.


Zuck started Facebook as Facemash, a site for students to judge the attractiveness of other students. It was a creepy site then and has remained so to this day. Sorry, but companies do take after their founders, because the founders set an example of what is OK, how to behave, what their values are, etc. While he may not be responsible for all of the actions of his employees, the buck stops with him.


Your sentiment reflects what I see on teamblind.com. It's all TC and leveling up. A ton of people don't give much of a shit about the subject matter.


If we go by the content of teamblind, then every company's engineers only care about TC and leveling up.


Are you saying that people on teamblind only focus on the TC aspect there but also care about many other things, or that only a subset of people care solely about TC? If the latter, it's still worrisome that so many people who hop around chasing only TC, not caring about subject matter, are highly valued at companies like Facebook. Glib and shallow E6s and E7s don't make for great tech leadership in a company that supposedly cares about ethics and safety.


It seems like very few engineers who make it to E6+ are the ones rambling on about "TC or GTFO" on Blind. It's mostly junior engineers or bad engineers who will never make it past the 2nd or 3rd rung of the career ladder.


I get your point, but glib and shallow climbers are exactly what you want for a company that only supposedly cares. They're very good at supposedly caring!


Blind is a toxic community, in my opinion.


> I presume the same was true about the senior execs. They were aware Twitter was causing harm to people. If they wanted to know the details, we had plenty of research and they could have ordered more. Did they care? Impossible to know. But what they focused on was growth and revenue. Abuse was a big deal internally only as long as it was a big deal in the press.

Could this just be an issue of too many problems to care about and not enough time to solve them all or do you think the indifference was intentional?


I worked at FB (not for very long), but you can trace everything back to the awful performance process they have (the infamous PSC). At the end of the day, hard-to-measure work doesn't get you promoted, while tangibly moving metrics does. If you incentivize people that way, it doesn't take anyone WILLINGLY doing anything evil to end up with a pretty evil thing on your hands.

If you optimize for profits and only that, you always end up selling crack cause it's the best business in the world, and that's why it's illegal.


I agree their incentive structure is at the root of this. But this is an incentive structure designed by one group of conscious actors and then followed by another group. A bunch of people choose this. And given the many years of public critique of Facebook, they can hardly be unaware of what they're choosing.

The truth is that almost anybody could sell crack. Most of us choose not to.


Too many problems to care about and not enough time? That's the human condition. What defines us is the choices we make, the priorities they set.

I can't know what they felt when they made those choices. But I can see the choices and the outcomes. I get there's some theoretical difference between willfully fucking people over to get rich and being so blinded by eagerness to get rich that you fuck people over as a side effect. But either way they worked very hard to get positions of power that affected millions and then were indifferent to the harm they caused, so it's not like this happened by accident.


Depends on whether not causing harm to users was an executive KPI.


How would "harming users" be defined for an executive KPI?


Abuse is measurable in all sorts of ways. The most clear one is having experts take a look at a random sample of users and see if they're being abused. You can back that up with interviews to look for both their take on what's happening and a variety of trauma markers. And there are all sorts of other measures that correlate.

But if there somehow weren't ways to measure it? Then they would have created a product where they couldn't even tell that they were harming people. That right there is something that shouldn't exist.
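The random-sample approach described above can be made concrete. This is only a sketch with made-up numbers: expert reviewers label a random sample of users as abused or not, and prevalence is estimated with a confidence interval:

```python
import math
import random

# Hedged sketch of sampling-based abuse measurement. The true rate below
# stands in for the unknown reality; in practice each sample entry would be
# an expert reviewer's judgment on one randomly selected user.
random.seed(0)
true_abuse_rate = 0.04  # hypothetical, unknown in practice
sample = [random.random() < true_abuse_rate for _ in range(5_000)]

flagged = sum(sample)
p_hat = flagged / len(sample)  # point estimate of prevalence

# 95% normal-approximation confidence interval on the proportion.
se = math.sqrt(p_hat * (1 - p_hat) / len(sample))
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"estimated prevalence: {p_hat:.3f} (95% CI {low:.3f}-{high:.3f})")
```

Tracking an estimate like this over time, backed by the interviews and trauma markers mentioned above, is one way "harming users" could become a concrete executive KPI.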


Follow the money


Yep, ask yourself how much identifiable return "preventing abuse" has, and then you have your answer for exactly how much these companies actually care about it.

Even worse, preventing abuse and other social media ills often lessens engagement, and you know what that means.


>Yep, ask yourself how much identifiable return "preventing abuse" has, and then you have your answer for exactly how much these companies actually care about it.

Quite a bit, then?


There is a phenomenon I have witnessed working both in high-growth startups and traditional Fortune 500s. At some point, the company starts attracting Dark Triad personality types who cement themselves in upper management positions, usually starting at the Director level. These people are extremely dangerous; one of them had access to my corporate laptop (as was standard policy for that company) and would torment me by screwing with it on a daily basis.

When an organization becomes too large or bureaucratic, these Dark Triad types hide and typically exert their influence, power and will behind the scenes. This is why these companies seem “evil”, but it's usually not the founders' fault; a lot of times they're unaware of it, or one of the founders is also a sociopath and will protect the evil cabal. That's my two cents about it, anyway.


Extremely insightful; I have had the same experience and I agree. I somehow have the ability to "sniff out" these types pretty quickly. Something about their conversations gives them away.

Here's a real-life example: I worked for a small startup years ago and the founder invited the engineering team to lunch at a restaurant. The founder proceeded to berate the waiter for no reason and yell at him. It seemed like completely psychopathic behavior, and then I caught the smallest of smiles from him after the waiter walked away beaten up (metaphorically speaking) by the barrage. I knew right then this person was not someone I wanted to work for, and put in my notice shortly after.


You can tell a lot about a person by the way they treat service workers.


This happened at a company I was at. Hyper-growth startup, huge aura around it.

A few high-level folks arrived whose perfection in smooth talking was rivaled only by their enjoyment of wreaking havoc on teams & relationships. They brought in their friends, paranoia and rumors spread, culture went off a cliff, CEO was confused what happened. Mass exodus followed.


How do you filter against hiring these types given their talking and schmoozing get them over the HR guard as well as director and above interviewers?


You don't just filter to hire; you also unfilter employee concerns. Most people think that CEOs, founders, owners, "management" are not going to back them, and they're generally right. Therefore most problems never skip the chain, and end up being suppressed by their own supervisors.


Assess for technical skill and raw tactical ability. This would have the downside of filtering out genuinely good leaders who've been too far removed from technology (and thus are pure people leaders) for too long. People who were once technologists, but no longer are, have a way of speaking that's easy to pick out. Also, some people move up but keep their technical chops along the way. An executive who can put a microscope on parts of the org, as needed, with a technical mindset can provide material value.


I don't like letting people bring their own teams with them. It's a shortcut to a discontinuation of culture and a form of second-order nepotism.


I do it via interviews oriented on doing the actual work. The more an interview tests the ability to talk about work and be charming and persuasive, the more it advantages awful people.

For software development, that's the opposite of what I want. Some of the best people I've worked with were terrible at interviewing. But once we got into actual code, they settled down and their skills shined through.


How I find them is in their speech. They come across as extremely insincere and "out" themselves by how they talk to either the interviewer (focusing a lot on themselves) or someone they believe is beneath them (dismissive). If you are a gatekeeper for their employment or something they need, expect flattery and overly kind words. If you are no longer that gatekeeper, expect never to hear from them again, or abuse. Also, this type tends to lie a lot; that's usually how they do get canned.


"Just call BS! Geez, that was easy. Why are all these HR people so incompetent?"

This assumes you are more intelligent than them, and can see through their insincerity during an interview process.

But have you considered that these "smooth talkers" can be smarter than you? Or at least have been perfecting their BS craft (while you perfected yours, such as programming), so that you are absolutely no match?


For sure. Standard lines of BS don't work well on me because they're optimized for other people. But I firmly believe that for every person, there's a line of bullshit they're vulnerable to. And the people most vulnerable are the ones most sure they're too sharp to be BSed.


You're absolutely 100% correct. About 2.5% of people have an inverted viewpoint of their own survival. Meaning, they think they need to squash others down in order to feel superior, as opposed to the logical path, which is to raise your team together and effectively lead to greater prosperity for all.


I think the bug in most organizations is that performance is reviewed only by the people above with very little to no input from below.

edit: Took me down memory lane to re-visit some truly incompetent and insanely unproductive bootlickers and fast talkers who would lose their jobs immediately after a 5-minute confidential talk with any of their underlings.


For sure. A friend at Apple says that it was much better to work at when it was not as successful. Now that they have large piles of money, it attracts people who seek proximity to large piles of money.


Reminds me of Mao; he'd always say whatever it took to gain power. But when he had it, he used it solely for personal gain or personal goals... regardless of how many people got hurt or killed as a result.


Mao Zedong was a "founder" of a militant "startup" called the Communist Party of China during a vast civil war; his experience has virtually nothing in common with a non-founder career climber in 21st century SV.


Well, there could certainly be personality and technique overlap


Totally agree with you; most of the time founders are not aware this is happening down the ladder. I have seen this scenario, with the middle layer of management protecting their jobs and gatekeeping, in most companies today.


Either the founder isn't aware, and they just get engulfed in the new culture until they have, at some point, no power left. Or they are among the worst of the pack and drive the new culture. Given that Zuck somehow managed to retain 50+% of voting rights until now, I would put him in the second group. Jeff would be the same.

And then you have the truly exceptional people, who manage to combine the ruthless drive needed to grow a successful company with caring about their people, and who have the capability to keep those bad actors out, or at least in check. I saw maybe 1.5 of these people in management functions, middle management that is. Not sure if those black swan unicorns exist as founders, though.


This is my life right now and it’s utter hell. The company is infested with these people and it’s astounding to me that nobody seems to care.


Game of survivor played out in real life.


> they might say that they cared. In their heart of hearts, perhaps they even did. But their behavior demonstrated that they cared about other things much more.

Isn't it just about that in the end? I think being good or not is about whether you give yourself the room to do the right thing even when other pressures exist – because they always exist.

Being good can be hard, because sometimes it means you have to abandon your usual priorities and stand up to the consequences which will emerge from that decision.


I completely agree with you. The real test of whether someone is doing something good is whether they do it despite the consequences. Otherwise it's meaningless; they simply are not being evil.


Just curious - given your boss's boss is so self-interested, what advantage could he gain from pushing out a subordinate and laying off all the people below?


Whenever I am having a hard time understanding a situation, someone's motives etc in the world of business/politics, I start with follow the money, and it helps. It might sound cliche, but it is also true in a majority of the cases


This is exactly the point Frances Haugen is making, and it's why this is so different and so much more significant than the other Facebook scandals and leaks in the past.

Haugen repeated over and over again her testimony today that Facebook is full of smart, thoughtful, kind, well-intentioned people, and that she has great empathy for them, even Mark Zuckerberg. Her point is that they have created a system of incentives that are inexorably leading to harmful outcomes. It is not about good and evil people, it is about the incentives. It's exactly as you are saying.

That's why she is not advocating to punish Facebook for being evil, but rather to force Facebook to reveal and permit research so we can understand the system and fix it, because Facebook is too deeply trapped in its own tangle of incentives to fix itself. In this I think she is absolutely correct.


> "Facebook has created a system of incentives that are inexorably leading to harmful outcomes"

Exactly right. The solution baffles me.

> "Force Facebook to reveal and permit research so we understand the system and fix it"

Basically keep the harmful system in place, but pass the reins to an unspecified cabal hiding under the innocuous word "we". Hard pass.


The "lean in" people and incentives have made society suffer for profit. Perhaps we can define a better set of incentives that reward companies of people building products.


We can. It's called pay directly for the services you use. It is a time-honored system where you give providers money in exchange for goods and services. In response, their incentive is to keep you happy and healthy and prosperous so you can continue to give them money.


No, their incentive is to get your money and get other people like you in case you die on 'em. They don't need you specifically, and they extra much don't need you to be prosperous.

The message 'you can save all this money, using us!' always means 'you can spend all this money with us'. I'm not faulting the general system or even your point here: I am, however, suggesting that while the system is fine it does NOT in any way imply that such people have or feel ANY incentive to your well-being.

You could maybe make a case that such a company might feel an incentive to the POPULATION it depends on… but even then, I feel like that might be mythical. In theory you don't want to eat your own seed corn, but such incentives toward good behavior are so easily ignored… and even if they are honored, it's a collective concern, NOT personal.

They don't care about you, and you are damn lucky if they care even a little about your wellbeing as a class or demographic… most likely they do not. And that's where the system tends to break down.


> In response, their incentive is to keep you happy and healthy and prosperous so you can continue to give them money.

Their incentive is to find a way to get your money; we can see in the world around us that many of them have no problem if you're insecure, addicted, and indebted.


Which will never happen. That takes customer impetus, and it's not there. People don't understand the cost of the free products they use, so they are unlikely to switch.

So what ways would influence your outcome to actually happen? Because I think it would be the right way to run software platforms as well, I just don't see a pathway there that isn't heavy handed.

I would be for regulating the advertising industry, since I feel it is the root of all this. None of the unethical software magnates would exist if not for the advertising dollars pouring through the door thanks to the ad-tech apparatuses they have built, and the poor incentives that creates. But that regulation is challenging and unlikely too.


I think a freemium model would be better. You should have to pay for having a large number of followers/friends past a certain point.

For example, maybe an account with 1,000 friends is free; up to 10,000: $5/month; up to 100,000: $50/month; and so on.

If you're Kim Kardashian with 250 million followers and you're making millions of dollars hawking skin cream or whatever, you can afford to pay a few thousand dollars a month to reach your large, valuable audience.

This way, the content creators can sell ads if they want. The platform doesn't sell ads. Users only see ads if they follow a creator who has sponsors. It's up to that creator to make their content worthwhile enough for people to choose to follow them in spite of the ads.

A platform should be like a company that sells TV broadcast towers. They give people a way to reach an audience. What that content creator does with their audience is up to them. Maybe they could charge a subscription. Maybe they get sponsors. If it's a large non-profit or government organization, maybe they pay at a lower rate or get to use it for free.
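A toy sketch of the tier schedule above (the function name and the extrapolation past the stated tiers are my own invention; the dollar figures come from the example):

```python
# Hypothetical tiered pricing: free up to 1,000 followers, then roughly
# 10x the price for every 10x the audience.
def monthly_price(followers: int) -> int:
    """Return a monthly fee in dollars for a given follower count."""
    tiers = [              # (max followers in tier, $/month)
        (1_000, 0),
        (10_000, 5),
        (100_000, 50),
        (1_000_000, 500),
    ]
    for cap, price in tiers:
        if followers <= cap:
            return price
    # Beyond the listed tiers, keep scaling: 10x the price per 10x the audience.
    price, cap = 500, 1_000_000
    while followers > cap:
        cap *= 10
        price *= 10
    return price
```

The exact curve doesn't matter much; the design point is that the people monetizing a large audience are the ones paying for the platform, rather than advertisers.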


The problem is, FB stock would drop 90% in an instant if they were to announce this.


Yeah, it would have to be something new. Facebook is too entrenched in ads that change.


end algorithmic recommendations


Ah yes. Get rid of Pandora and GoodReads, because what they're doing has basically the same outcome.


Granularity matters. Social feed granularity is small enough that an algorithm, even primitive, can sketch an arbitrary narrative on the spot by juxtaposing unrelated items, akin to a ransom note built from letters cut off from different publications. Pandora and especially GoodReads have large granularity, making it difficult to employ in the same manner.


Very different outcomes I'd say. Are friends and family getting torn apart on those platforms? Do they need armies of moderators to remove abuse material or fact-check posts? (I'm sure there's some, but not on the same scale as Facebook.) This is the first I've ever heard such a thing suggested, and certainly haven't observed it personally.


I was being facetious. GP was advocating banning all recommendation engines.


Maybe subject algorithmic feeds to public oversight.


It is subject to public oversight. It's designed to give the public exactly what they want.


That's like saying a slot machine is regulated by the gambling addict.


TikTok wouldn't be allowed to exist.


It's more nuanced than this. You can watch her full testimony here if you're actually interested: https://www.youtube.com/watch?v=GOnpVQnv5Cw


It may be that facebook can't fix itself, but what makes anyone think an even larger and more powerful organization is the answer and won't itself succumb to its own system of incentives? She is pushing for the equivalent of The Ministry of Truth.

Relevant testimony here: https://twitter.com/gillibrandny/status/1445451624005001217

Remember, this is the system of incentives that had us spend 20 disastrous years in Afghanistan, across both parties. And has failed to deal with climate change. And healthcare. And education. And wealth inequality. And housing. And... Siri, what's the definition of insane?


Nothing in her testimony referenced a "Ministry of Truth" or any type of censorship. She specifically spoke against "content-based" solutions.


It's right there in the link OP provided.

Senator Gillibrand: "We need a Data Protection Agency."

Frances Haugen: "And there needs to be a regulatory home where someone like me could do a tour of duty after working at a place like this"


In what universe is "Data Protection" a "Ministry of Truth"?


You realize the whole point of it being called the "Ministry of Truth" was that the name didn't reflect what it actually did, right?


I shall now live my life in fear of the Department of Motor Vehicles. Who knows what they do?!?!


This one. Just give it time.


By the way let's give a name to that system, it's called "PSC". Google it. It's the most absurd and ineffective performance management system I've ever witnessed.

It creates a Hunger Games mentality within teams and makes doing anything that actually matters virtually impossible, generating an infinite sequence of half assed 6 months projects that get systematically abandoned as soon as the people responsible manage to get promoted or switch teams.


> By the way let's give a name to that system, it's called "PSC". Google it.

Unless you mean the Florida Public Service Commission, PSC Motorsports, or Pensacola State College you're going to have to be more specific.


“PSC stands for Performance Summary Cycle at Facebook“

At least according to https://www.thelayoff.com/t/13z18mcE when I searched for “facebook psc”.


you are correct!


> It creates a Hunger Games mentality within teams and makes doing anything that actually matters virtually impossible, generating an infinite sequence of half assed 6 months projects that get systematically abandoned as soon as the people responsible manage to get promoted or switch teams.

That's a bit of an overdramatization. PSC is just peer feedback, and is very similar to perf reviews at Google as well as other large SV tech companies. Having done both, I didn't experience this "Hunger Games mentality" you described.


The descriptions of PSC I've seen don't look that bad. Do you have details about what's wrong with it?


> they have created a system of incentives that are inexorably leading to harmful outcomes

If the people inside are "smart, thoughtful, kind, well-intentioned people", they would have tried to work around the incentives, influence them, denounce them, or quit.

It rarely happened. Most of the time, they just take the money and go with the flow.


A long time ago, I worked in a startup full of smart, thoughtful, kind, well-intentioned people. Of course, there was also a CEO who was a ruthless manipulator and managed to make everyone believe that they were working for the greater good. In truth, in the course of lining his pockets, he was in the process of destroying several employees and former employees.

Getting past the illusion was hard.

Taking a stand against said CEO while nobody else was aware of the problem? Really, really hard.

Now, instead of a few dozen employees, Facebook has tens of thousands. I assume that all of them are subject to permanent propaganda, as in many tech companies, and that the semi-official word is that they are being misunderstood by the rest of the world, because of course they are doing the right thing but the problem is harder than people think (well, that last part is true, at least). I suspect that it's even harder to go against the flow.


Yes, that's what Zuckerberg said.


Incidentally, that's also a good explanation of things that appear to be conspiracies.


How is giving access to user data for "research" better than the whole data privacy scandal with Cambridge Analytica?

These days research comes with a set of politically charged assumptions; for example, the definitions of "hate speech" and "misinformation" differ based on which political camp you ask.

So giving access to Cambridge Analytica is bad, but giving it to some other partisan "think tank" is fine? Who would make those decisions?


The government has systems in place for doing this. For example, the SBA has tons of data about small businesses across the country. You can get access to it... IF you are part of a research institution and go directly to their dedicated research facility so that you can't exfiltrate the data. Such a model is open, just not free. It would probably be the right model for this issue.


I don't think it's an emergent property, I think it's a by-product of the constraints. It's all well and good that they want to make Facebook safe and healthy, and I honestly believe plenty of people working there are trying to do just that. However, they are operating under the constraint that they cannot move backwards on profits, and therefore engagement.

Imagine if you were trying to fix climate change, but under the condition that you weren't allowed to burn fewer fossil fuels. You may try very hard, and very sincerely, but it's a fool's errand.


Nice.

> I don't think it's an emergent property, I think it's a by-product of the constraints.

> Imagine if you were trying to fix climate change, but under the condition that you weren't allowed to burn fewer fossil fuels.

There is one person who controls all the constraints: Zuckerberg. He even went so far as to enforce that through his stock classifications. It’s entirely understandable and acceptable to have empathy for those working at FB who are attempting to solve the problems. But Zuckerberg made the decision to be the single source of the constraints that bind everyone below. And his constraints are: profit over all else. He should face consequences for setting those constraints, just as anyone should who set a constraint of “address climate change without adversely affecting GDP”.

Separately, and as the “revelations” of Zuckerberg’s immoral behavior continues year after year, those who work for him but are attempting to solve the problems, should recognize at some point in the future, now, or in the past that the problems are insurmountable within the confines of the constraints. As that knowledge spreads, then the question becomes whether those idealistically earnest individuals are justifiably ignorant of the reality: that all their best intentions are moot in the face of the constraints as were determined by Zuckerberg. And when or if they are no longer justifiably ignorant, they become culpable.


Zuckerberg is simply in over his head, and I think he knows it (I certainly wouldn't want to be in his shoes). I don't think he's evil; I think he was enamored of this toy he built, he pushed it in very logical "business" directions, and now it's been adopted by so many people and is so big that its business model is having real-world impact, where I'm sure he'd prefer, from an intellectual perspective, that it acted totally passively. He's right that no business should have to determine the morals of a society, which is essentially what we are asking of Facebook. The bigger picture is more complex than most people realize.


I think you are overly generous to him. He has extremely powerful tools at his hand, and he properly owns them and has absolute power over them.

But for whatever reasons (ego, the greed of seeing his net worth rising and the fear of losing some of it, etc.) he won't take the morally right step that would harm FB's financials in any way.

On top of that, let's be clear: the mission of FB was never some altruistic connecting of the world. On the contrary, it was all that juicy private data on each of us while we are connecting and interacting, quietly building a shadow profile for every single human being. There is no moral high ground there, no matter how much mental gymnastics you try. If FB somehow leaked that data publicly, the company would go bust very quickly.

In more than one way, I struggle to understand these whistleblowers: they get hired for tons of money into a company with a clearly amoral (or at very best dubious) mission, and then they are surprised when it actually is... A similar case would be going into investment or private banking and then being surprised by how the business is set up and how the decision makers in it behave.


Nobody is forcing him to keep doing this. He’s waking up every day and making the choice to keep running FB today the same way he ran it yesterday. He could just quit


Sure, but... what would that change?


This is a rebuttal to the “he’s in over his head” argument. If he personally is in over his head, the obvious solution is to quit and let someone more capable run the company.

I personally do not buy the “in over his head” argument, fwiw.


Fair enough.


> Imagine if you were trying to fix climate change, but under the condition that you weren't allowed to burn fewer fossil fuels. You may try very hard, and very sincerely, but it's a fool's errand.

I like this. This helps me. Thanks.


> Imagine if you were trying to fix climate change, but under the condition that you weren't allowed to burn fewer fossil fuels. You may try very hard, and very sincerely, but it's a fool's errand.

This also happens to be China's literal climate-change policy. They announced a pause on funding external coal plants, and doubled down on their internal ones.


A rule more analogous to Facebook's presumed position would be, "you can fix climate change, but you can't do anything that would reduce GDP per capita". Which in practice means that while some useful tools would be on the table, others would definitely not be.


Not really, since Facebook's revenue is much more directly tied to engagement than GDP is to fossil fuel consumption.

To extend the metaphor, Facebook's "alternative energy" is non-advertising-based revenue. I see zero effort from Facebook to move away from ad-based revenue, so there is zero chance that Facebook is going to make meaningful progress in changing.


> I see zero efforts from facebook to move away from ad based revenue

Portal hardware. Oculus + Game Store. Facebook Gaming.

Lots of projects, no stable hits.


Maybe they realized internally they are as hopelessly addicted to fossil fuels (ad revenue) as most advanced economies were in the 20th century.


If Conway's Game of Life taught us anything, it's that even very simple rules can have surprising emergent behavior.
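As a concrete reminder of how little rule complexity emergence requires, here's a minimal Life step (live cells as a set on an unbounded grid), showing the classic glider translating diagonally even though the rules say nothing about movement:

```python
# Minimal Game of Life: each generation, a cell is alive if it has exactly
# 3 live neighbors, or 2 live neighbors and was already alive (B3/S23).
from collections import Counter

def step(live):
    """Advance one generation on an unbounded grid of (row, col) live cells."""
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells that re-form themselves one cell down-right every
# 4 generations. Nothing in the rules mentions "moving"; it just emerges.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
assert state == {(r + 1, c + 1) for r, c in glider}
```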


Agreed. Any moderately complex system can have emergent behavior. This is a fundamental feature of complexity. You can sometimes take advantage of it, by finding unexpectedly profitable features that only exist at scale or in conditions you happened into.

When that emergent behavior both increases profits and takes advantage of your customers in negative ways, the relationship moves from something symbiotic to something parasitic. This is where it begins to cross the line and people start throwing "evil" around when describing you.

I don't think most companies are out to create a worse world, but many do it until they are forced to reset.


I think a more apt analogy has advertising as Facebook's fossil-fuel burning, but then I expect severely curtailing fossil-fuel use would severely reduce GDP, which I am guessing is not a common belief around here. (I'm guessing that many around here think that it is essentially just the stubbornness of those in power that keeps fossil-fuel use high, and that even if we force the whole world economy to transition to 100% renewables over, e.g., the next 5 years, things will turn out fine.)


The hypothetical "most people" in this statement would be dreadfully, dreadfully wrong if they think it would all turn out fine. They are vastly underestimating how much the modern world, in literally almost any aspect you can imagine, is reliant on fossil fuels.


Yup. I've always said this. You must solve climate change but nobody anywhere is allowed to lose money. Go.

Conflicting goals. And the one you want to fix is secondary to not losing money.


There is geoengineering! :-)


What’s the equivalent of geoengineering for Facebook?


Offering every user a free Zoloft prescription.


It’s a fun idea!

Although if this plan were viable, one can imagine the Zoloft marketing department would have already purchased the ad space. The limiting factor is probably the need for a psychiatrist to approve a prescription for every patient. Most users in developed countries are already eligible to get the drug itself for free, or at least cheaply.


Spot on.


> So maybe Zuck is telling the truth here, that they are trying to fix all this. But no one can see the forest from the trees.

Ah, this is what I think of as Schrodinger's Accountability. Zuckerberg and Facebook's senior execs are simultaneously: A) so brilliant for running Facebook that they deserve to be incredibly rich, and B) so normal that they can't possibly be expected to understand the consequences of their actions, and so are morally blameless. Heads they win, tails we lose.

I say it's one or the other. If Facebook is too big to be understood, it should be broken up into small enough units that mere mortals can see the forest and tend it responsibly. And if not, the execs should be morally and legally culpable for the harm it does.


You may be missing the point if you think your point is orthogonal to theirs. Mark Zuckerberg doesn't have to be painted as a reptiloid in order for his actions to be bad, or for those actions to cause harm. More than blaming, shit needs to get fixed, right? We can still hold people culpable, but we don't need to; we don't need to indict anyone before trying to fix a problem that is self-perpetuating due to individual incentives and a complete lack of oversight.


The senior execs are the ones who set up the incentive systems there. They are the ones who are richly paid to provide oversight. So either this is exactly what they want or they're hopelessly incompetent.

And yes, I think we should change the incentives. We should change them such that executives face direct personal punishment for negligent or intentional harm. We should have learned that lesson during the 2008 financial crisis, but instead nobody did time. The worst that happened was that some very rich people were forced to give back a modest percentage of their gains.


Again, this

>>We should change them such that executives face direct personal punishment for negligent or intentional harm. We should have learned that lesson during the 2008 financial crisis, but instead nobody did time

does not contradict anything else.

But this

>>So either this is exactly what they want or they're hopelessly incompetent.

is incorrect. Those are not the only two options. Reality is typically more complex, and more boring than that. Doing harm does not require willful malice or profound stupidity and trying to reduce it down to that does no one any good.

Not that, after investigation, we couldn't find that it was really one of those two cases for many people! But 'negligent harm', which is something you want people to be held accountable for (me too!), does not require gross incompetence. It can be as simple as ignoring a couple of inconvenient truths and being insulated from the consequences of one's decisions, or from the cumulative outcome caused by the group.


I am saying that we end "being insulated from consequences" by choosing to reduce it down to those two options.

Think of it similar to handling explosives. Are they useful and important in society? Definitely. Are they subtle and complicated, such that working with them can easily harm somebody in ways that are not foreseeable to the naive? You bet.

But when somebody decides to create and apply explosives and hurts somebody, we don't just say, "Gosh, that's very complicated. Who could have known how it would work out?" We say, "You intentionally chose to work with something powerful and dangerous, so you're responsible for the harm you caused."

Is it more complicated? Sure. And I'm saying that when it comes to highly paid executives who seek out positions that put them in control of dangerous complexity, they become responsible for the outcomes.

They are already seen as responsible when it comes to anything good that happens on their watch, which is why they get paid such vast sums. I'm saying they should be seen as equally responsible when it comes to the harms. No more of this "Oops, we crashed the economy/poisoned a bunch of people/actively enabled genocide" stuff. All of that "more complex" reality becomes their problem if they are in control of it.


> So either this is exactly what they want or they're hopelessly incompetent.

I think the definition of success they're operating under is vastly different from yours.


What do you think their definition of success is?


Now adding “Schrodinger’s Accountability” to my list of mental models


What does breaking up a company actually do for the consumer? I don't think telecoms are any better for the consumer decades after we broke up Bell. There is strong incentive to just form a cartel, like telecoms today, rather than a competitive environment that is beneficial for the consumer.


The breakup was a huge win for consumers. Long-distance rates dropped significantly due to competition and the telephone system became much more open.

And I think it dramatically aided early internet adoption. If you read "Where Wizards Stay Up Late" you'll see how big a barrier AT&T was to the adoption of packet-switched networks, rather than the circuit-switched networks they sold people. How far would the Internet have gotten if AT&T had banned home modems [1] or priced early ISPs out of existence? They would have vastly preferred something like AOL, not the Internet, which destroyed their long-distance call business entirely. Look at how they behaved with mobile apps once Apple launched the App Store.

Our problem was that we didn't stick with it. Starting in the Reagan era, antitrust enforcement shifted toward much laxer standards. So AT&T reassembled itself as a (smaller) juggernaut and kept going.

[1] I realize this sounds insane now, but one of the things the DoJ sued for is "Obstructing the interconnection of customer provided terminal equipment and refusing to sell terminal equipment, such as telephones, automatic answering devices or switchboards, to subscribers".


It's hard to say how things would have gone differently, but I don't think it would have been much different. Looking at the ISP world today long after the dust settled, it's not that much different from the Bell era in terms of consumer choices. I have one choice in ISP for my address. A de facto monopoly entrenched by a lackadaisical attitude towards expanding infrastructure connectivity.


As I said, our problem was that we stopped holding monopolies to account. The Bell breakup was a last major success of the old approach to monopoly regulation. The reason you have one ISP is not the thinking that brought you the Bell break up, but what came after.


> a competitive environment that is beneficial for the consumer.

You mean a competitive environment like this one? https://www.beingguru.com/wp-content/uploads/2019/06/top-mes...


The problem is that they came back together again.


'Deserve to be rich' is the wrong frame. What is a sensible procedure to decide who deserves to be rich and who doesn't? The say so of powerful politicians? 'Raised to the top via a combination of skill, luck and shrewdness' is more accurate. The fundamental problem is that the world is governed by power laws. As the size of the ecosystem grows (hello globalization) at some point it becomes obvious that no humans can effectively control the largest of the emergent entities. We need to break up Facebook, we need to break up the Internet, we need to break up the global economic system. We need to add friction back into the world. A lot of friction.


Currently the worldwide hunger and infant mortality rates are at an all-time low while population is an all-time high (but the growth rate is quickly decreasing, so the threat of overpopulation has passed). Economic growth lifts a substantial chunk of that population out of poverty each year. Are you worried that you might immiserate or kill most of them when you break up the global economic system?


> I personally know and have previously worked with some of the people who work on trust and safety, specifically for kids. Good people who have kids of their own and who care about protecting people, especially children.

Those same people are protecting their children with $300k+ salaries and buying property in areas where they can send their children to Gunn HS. While I empathize with these people, the direct opportunity to protect your own kin should not be understated. Do they mean well? Sure. Are they putting in their best effort to fix things? Sure.

Here's the most important part:

Do they know deep down inside that the only way to fix these things is to hurt Facebook financially? Probably. But they also know this means risking their ability to protect their own children (forced to move, lost job, less pay, etc.). What would you do? (I think I know the answer.)

This can't be overstated: in the end it doesn't matter what individual people at FB think, because no one person or group of people has any legal, economic, or logistical ability to control the company except Mark Zuckerberg. He is figuratively and literally impossible to fight. Well, unless everyone deleted their accounts.


> Do they know deep down inside that the only way to fix these things is to hurt Facebook financially? Probably.

The crazy thing is that FB has taken steps to improve things in the past that also hurt them financially (e.g. post-Cambridge Analytica). They just make so much money, and so fast, that it's like one or two bad quarters and it's over.

So (1) Mark being all-powerful means he alone can decide it's worth lower profits; he's done it before.

(2) The loss of profits probably wouldn't even matter.


Without all the bullshit, Facebook is just AOL Messenger and a group calendar; they need it.

People need to interact with Facebook non-stop for it to remain relevant.


I also wonder how many of them forbid their children from using Facebook’s products.


I've been framing this whole thing as a universal property of human society and it seems to fit pretty well for me.

Outrage attracts attention in all group interactions. I can't think of a single large scale group forum where this isn't true. It's integral to an absurd degree in our news cycle. Howard Stern exploited this property in his rise to fame. It's a core element in state propaganda, well documented throughout human history.

I'm old enough to remember when the internet was a lot more free - when there generally wasn't some parent corporation imposing content censorship on what you put on your homepage, or what you said on IRC. All of the complaints regarding Facebook were true of internet communications back then too (on the "sex trafficking" issue, compare to Craigslist of yore!)

The big difference seems to be there's an entity we can point a finger at now. Communications on Facebook aren't worse than what was on the internet two decades ago. In fact, they're far, far more clean and controlled.

What I look to is whether Facebook is more objectionable than alternative forms of communication, and I can't find any reason to believe that this is the case. Is twitter better? Is reddit? Is usenet? No.

So why does Facebook draw such ire?

Are people calling for controls on Facebook also calling for controls on self-published websites? On open communication systems like IRC or email? Where is the coherent moral philosophy regarding internet speech?

To be honest, my biggest concern when I read the news surrounding this issue is that most of the internet might not be old enough to remember what it means to have a truly free platform, unencumbered by moralizing. Why are people begging for more controls?


I think a lot of folks forget that Facebook wanted to come in and clean up some of the filth in social media. They felt that by attaching your _real_ name to your posts, instead of a handle as was the traditional practice, that you would have something to lose (social standing, esteem, etc) and so you would be more thoughtful about your actions. The contrasts at the time were reddit, SomethingAwful, and 4chan. There was _definitely_ extant toxicity on the internet and there were funny posts in the early days of GMail that you could stop them from displaying ads by inserting lots of expletives and bad words in your email (and so some would have GMail signatures that just lumped bad words in together and explained it as an ad circumvention thing).

But I think there are a few key innovations that make FB worse for human psychology than previous iterations. Chief among them is the algorithmic newsfeed designed to drive engagement. Outrage certainly provokes responses, but in a chronological feed situation, eventually threads would become so large that the original outrageous situation would be pushed far back and the outrage would go away. Algorithmic newsfeeds bubble these to the top and continue to show them as they get more comments/retweets/shares/etc. They reward engagement in a visceral way that offers perverse incentives.
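To make the dynamic concrete, here's a toy sketch (this is not Facebook's actual ranking, just an illustration of the incentive): under a chronological feed, a day-old outrage thread sinks below fresh posts, while an engagement-weighted score keeps resurfacing it at the top. The score formula and the numbers are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    age_hours: float
    comments: int
    shares: int

posts = [
    Post("vacation photos", age_hours=2, comments=3, shares=0),
    Post("outrage bait", age_hours=30, comments=450, shares=120),
    Post("baby announcement", age_hours=5, comments=40, shares=2),
]

# Chronological feed: newest first. The heated thread ages out
# no matter how much activity it keeps generating.
chronological = sorted(posts, key=lambda p: p.age_hours)

# Engagement-ranked feed: accumulated reactions outweigh recency,
# so the 30-hour-old outrage thread keeps bubbling back to the top.
def engagement_score(p: Post) -> float:
    return (p.comments + 3 * p.shares) / (1 + p.age_hours) ** 0.5

ranked = sorted(posts, key=engagement_score, reverse=True)
```

The "perverse incentive" the comment describes is visible in the two orderings: the same three posts, but one ranking rewards whatever provokes the most responses.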

Secondly is the filter bubble. By showing you content hyper-relevant to your search interests, you can easily fall into echo chambers of outrage and extremism. Internet communities, like IRC channels, had huge discoverability issues. Each community also usually had disparate ways to join them adding another layer of friction. Even if you were an extremist it took dedicated searching to find a community that would tolerate your extremism. Now mainstream platforms will lump you into filter bubbles with other people that are willing to engage and amplify your extremist posts.

Combine horribly narrow echo chambers with engagement-oriented metrics and you'll have a simple tool for radicalization. That way when you're thinking of committing a violent act because of the disenfranchisement you feel in your life and your community, you'll be funneled to communicate with others who feel similarly and enter a game of broad brinkmanship that can quickly drive a group to the extreme. Balkanization and radicalization.


> I think a lot of folks forget that Facebook wanted to come in and clean up some of the filth in social media. They felt that by attaching your _real_ name to your posts, instead of a handle as was the traditional practice, that you would have something to lose (social standing, esteem, etc) and so you would be more thoughtful about your actions. The contrasts at the time were reddit, SomethingAwful, and 4chan. There was _definitely_ extant toxicity on the internet and there were funny posts in the early days of GMail that you could stop them from displaying ads by inserting lots of expletives and bad words in your email (and so some would have GMail signatures that just lumped bad words in together and explained it as an ad circumvention thing).

This is such a great point. The pre-Facebook Internet was full of anonymous random garbage. But everyone knew it was inconsequential garbage. Adding real names and likes changed all that: today garbage has gained legitimacy and is displacing prior forms thereof.


"the outrage would go away"

If there's one thing I've learned it's that the outrage never goes away. The type of people who fixate on outrage in their Facebook feeds are the same type of people who decades prior would cruise around town picking fights in person. I'm unconvinced that Facebook is meaningfully changing this dynamic.

I'm also unconvinced that the filter bubble is meaningfully different than what's come before. Humans have been sorting themselves into like-minded communities since before we could read and write. Do you remember the hive-minds of the 80s and 90s? If anything they were far more extreme because of the difficulty in proving anything, back before google and wikipedia. There was a lot more extremism and hate based violence back then. A LOT, LOT more, and no interventions like Facebook is at least attempting to provide.

Facebook has some new angles on old patterns in human behavior, yes. But I think the people who are trying to show that it has made things worse have a lot of work to do to make a compelling case. Facebook's biggest transgression is probably that it has chronicled this behavior and dragged it into the light.


Very well put. When people say "it's always been like this" or "it's no different than X" – this is exactly the difference, and while fundamental human behaviors or impulses haven't changed, the design of the platform is changing how they are expressed.


We used to solve this problem by teaching people to have thicker skin so that we control the outrage regardless of the forum in which it occurs.

However for the last 10 years or so grievance culture has taken root and not only excused outrage, its proponents have actively encouraged it.

It makes me think of that scene in Star Wars where Palpatine is like "Good, good. Let the hate flow through you", except we now have millions of people encouraging this.

How I wish we could rewind to a world where forgiveness was still a virtue and we were all taught that sticks and stones may break our bones but words will never hurt us. Without such virtues, a world of outrage is inevitable.


I think this is an important point indeed. A piece of this puzzle, in my opinion, is that people are not taught this at home anymore. Most families have both parents working full time, and they're exhausted after work. Their kids are raised in daycare and neglected. And so many are raised in divorced/broken/separated/single-parent households that compound the problem much more.

Furthermore, most of the US isn't religious anymore. These values and maxims mentioned above are not taught to people anymore, at least not to the degree that they were in the past.

A piece of this should be better training in the home for kids on how to understand the internet. To avoid being hateful and to question things. But so many kids are left to their own devices without parental oversight on this subject. I've even heard the call recently that parents want high schools and colleges to start teaching courses on how to avoid harmful content and misinformation online.

In what feels like ancient history, this used to be the parent's job, before both spouses were working full time.

Our kids and the younger generation suffer from lacking parental instruction on this.


> But so many kids are left to their own devices without parental oversight on this subject.

It's very "Lord of the Flies", isn't it?


> Communications on Facebook aren't worse than what was on the internet two decades ago.

Let's not underestimate the degree to which 'likes' (social affirmation ersatz) are eliciting the worst in people.


My point is that Facebook likes are simply a manifestation of a ubiquitous social characteristic.

We all get likes. Sometimes they're called upvotes. Sometimes they're called replies. Sometimes they're cumulatively seen as our status in the social pecking order.

Facebook doesn't add anything truly new or transformative here. These problems and patterns are ancient.


The patterns and problems are ancient, but convenience is a significant factor in terms of enablement and resulting harm. Humans and other animals have been vulnerable to addictive substances for as long as we can tell, but the level of effort needed to get high was much much harder before we learned how to process and distribute addictive drugs cheaply and efficiently.


Usenet isn’t social media and doesn’t have a feedback/reward system.


Usenet certainly does have a feedback/reward system. All group social interaction does. Trolling for feedback/reward predates Facebook not just by decades, but by millennia.


No one is calling for the internet to be less free, or have more constraints. They're calling for specific platforms to alter their interactions model to discourage toxic group behaviors at scale.


Seems like you, more than anyone, would see that solving the types of problems FB is trying to solve, e.g. freedom of speech vs. user safety / harm reduction, is not some super simple problem, no? I wouldn't call Reddit evil, despite the fact that many powermods are amazing contributors doing free labor and curating great communities while simultaneously abusing their power every day: silencing people they disagree with, shaping narratives in human culture, automating blanket unappealable bans on users for participating in unrelated subreddits (even if you were participating in that subreddit to combat its views), making snap judgments on content moderation that might ruin someone's day when they make a bad call on a ban or delete, or unilaterally appointing themselves mouthpieces for their broader communities via subreddit blackouts or preachy pinned posts.

It's unfortunate that when you build a product so close to the ground of human communication and human nature you're never going to be able to get everything right, and you're no longer solving technology problems alone but trying to basically combat basic human moral failing itself. We don't ask that of the telephone company.

^ That being said, we can only excuse some of their failures with the above line of thinking. Others we can blame on greed or recklessness, or ignoring the social costs of something like ML recommenders optimizing for engagement. Not sure if those things deserve to be called evil, but I'd still hold back personally. Misguided, overcome by greed, or reckless, perhaps.


Point of order: the issue with Facebook is the various engagement algorithms that they are and have been perfecting. This is unlike anything humans have ever seen before. We are no longer anywhere near to 'the ground'.


Yeah, there is a big difference between Reddit and Facebook in the above comparison. All the examples of issues with Reddit can more or less be attributed to specific people, and fall more in line with "bad" human behavior. Facebook's algorithm is something entirely different in its design: its primary objective is to manipulate the behavior of the user on the other side, and what it chooses to show or not show doesn't follow any human line of reasoning outside of some loose built-in "safeguards" and the unenviable content moderators meant to serve as guardrails.


As others have said, my experience with Facebook just doesn't mirror the anger and hatred that other people are seeing. My Facebook stream is just everyday things from friends I have made around the world. It is very useful for me to maintain a bit of contact with people I esteem so much but with whom I've lost touch over the years.

The "angry facebook" experience to me seems like the moms against heavy metal / twisted sister case: People are seeing a reflection of what their peers share.

If their circles are angry and share disinformation, that's what they will see.


I've also never had an issue with Facebook. I've been online through usenet/irc, AIM, livejournal, and then forced to join Facebook because everyone at the university was using it for class correspondence. Later, I have exactly your sentiments, that it has allowed me to stay in touch with people I would have lost touch with over the years. I take advantage of some of the groups for my industry, and my hobbies. I use our company's Page to interact with a whole segment of our international customer base, that would never think to call our support telephone number or e-mail. It's never been a negative experience for me. Although I only look at it when I get home from work at night on my desktop computer. And don't ride around all day with the app running in my pocket. I don't quite know if that would make a huge difference though.


This is true too. And the majority of Facebook users I know unfollow people who post political topics.

It is about the circles.


What I hear from Zuckerberg over and over is "we're good people and working on it, look at A and B things we're doing" with an implication that that's good enough, so what's everybody up in arms about? That's the core of his tone-deafness to me. If Zuckerberg is fully honest, it means he basically just doesn't have a grip on reality and he isn't fit to lead a corporation this big and impactful. And I tend to believe that, because he's ultimately just a college kid with a laptop who ended up in some circumstances that snowballed.


> ultimately just a college kid with a laptop who ended up in some circumstances that snowballed

When will this “just luck” characterization of Zuck die? His entire company was certain they should sell for $1B, and most executives resigned when he didn’t. He maneuvered control of the majority of voting shares, how many other founders have done that? Instagram and WhatsApp were genius acquisitions everybody at the time clamored were too overpriced. Even Oculus has turned out to be the leading VR platform. All of the people close to him attest to his extreme intelligence.

Whether malicious or not, Zuck didn’t just “aw shucks I got lucky” into the majority owner of a $1T company, cmon…


Nah, he's really smart, but implying 1T is a measure of his genius is ridiculous. Not only does it downplay the massive contributions of hundreds of people including Peter Thiel and Sheryl Sandberg, but it ignores the market conditions that led to thefacebook.com going viral, not to mention the Winklevosses, who got paid billions in today's valuation. Do you believe that if Zuck never met the Winklevosses, he would have necessarily built a 1T company anyway, because quantity X of genius must necessarily manifest as F(X) valuation? I think the market violently disagrees with you.


I'm disagreeing with the "just a college kid" portrayal. There are of course a few circumstances that greatly helped the trajectory, as did many other smart people along the way. What I'm trying to get across is that without Zuck being very intelligent, Facebook's level of success would have been far, far smaller.


I agree, thanks for the clarification. I shouldn’t have been as brutal in my initial portrayal.


It will never end because people have the inherent desire to tear down those they don't like.


> So maybe Zuck is telling the truth here, that they are trying to fix all this.

Except they are just playing around with the outrage algorithms; the problem is created by Facebook, not some natural occurrence. If they wanted to "fix" anything, they would make their algorithmic timelines opt-in, or at least optional, for starters.

It is of course very much in the interest of the people working at Facebook to make this seem like a problem that is just there, that it is somehow "difficult to solve", that "moderation doesn't scale", etc. These are deflections to make everyone ignore that Facebook's tampering is where it starts.


This. Their entire premise of modifying their engagement optimization to try to account for wellbeing, while still optimizing engagement, is flawed. It's clear that outrage and anger drive engagement above all else. If they wanted to fix things they could just bring back chronological feeds; but they won't, because the incentives are just too misaligned.


Conveniently, everyone knows this except the people who stand to lose a lot of money by saying it out loud.


I wonder how Facebook’s algorithm works.

I know YouTube’s just recommends based on what you just watched/search (you can disable this aspect by clearing or disabling your histories), channels you have subscribed to, (I believe) videos you have “Like” or commented on, and videos you have marked as “Not interested -> I do not like this video”.

Is Facebook’s as “viewer driven”? Or does it recommend based other criteria? e.g. like what’s generally popular.


Good people have gone to work at facebook (and google) on jobs like privacy engineering and really try to do good work.

however, no matter how capable and ethically sound they are, the incentives are forever misaligned with the profit models for both, and adtech over all, as it currently stands. truly good people can chase money and hope to do good things in the process. it's as easy as this.

the writing was on the wall when alex stamos, by all measures the best example of the type of person you're referring to and FB's chief security officer... left. started in 2015 and was out of there by 2018. not many c-levels walk away from a job like that for the reasons he did, and when they do, that should be the event to pay attention to (looking at you, sheryl "lean in" sandberg). this was the marker event if people were looking.


This is called the Banality of Evil. Look up Hannah Arendt. It is a well established idea.


> It is a well established idea.

If by "established" you mean that it's well known, then yes, you're right. If instead you mean that it's agreed-upon or widely accepted, you'd be wrong. There's a lot of great debate / critique, both about how well the phrase actually applied to Adolf Eichmann himself (Arendt was famously only at the trial for like 5 days), and whether evil in general is ever, in fact, all that banal. Sadly the conversation around "the banality of evil" hasn't received a fraction of the attention that the phrase itself has.


>"Arendt was famously only at the trial for like 5 days..."

Besides David Cesarani's "Becoming Eichmann" book from the mid-2000s, where he stated Arendt "only saw Eichmann in action for four days", are there any other references that support this? I've not been able to find any. Also, the "in action" in that context specifically refers to Eichmann's testimony, not the amount of time Arendt spent in the courtroom. I'm not sure how "famous" this is. Elsewhere, her correspondence with her former teacher Karl Jaspers indicates she was there for 10 weeks. That would be about a third of the 8-month trial.


I don't think this idea matches what parent poster is saying.

Banality of evil is about how ordinary people can work on evil things while not being sociopaths and still being considered ordinary people. But it also presumes that there is some truly evil / sociopathic force driving this through authority, such as Hitler himself in case of Eichmann.

On the other hand the parent poster is saying that Facebook is simply too big to not end up evil, that evil is an emergent property of the million different processes that is Facebook. That view absolves not only regular workers of Facebook who are helping the company achieve evil things, but also the people who are actually in control of the company – Mark Zuckerberg and his senior executive team.

Personally I'm not buying either of these absolutions, but especially not the grand universal absolution that the parent poster affords to the whole company.

Ultimately it is someone's decision to put profits above everything else. Engagement doesn't excessively optimize itself. Users' contact books aren't getting stolen by themselves. Shadow profiles don't fill up themselves. "Just doing my job" is a choice, not an excuse. Many people are complicit in making and implementing these decisions for their own benefit, and they are all responsible for the outcome.


I don't think that's what the OP means, though. It's not "decent" people doing evil things. It's great people doing great things, within an organization that also does bad things.

There are some amazing people on their safety and moderation teams. They're also fighting marketing algorithms, I'm sure.


Eichmann in Jerusalem is the book that coined the phrase for anyone passing through, and it's a pretty wild story.

It's essentially Arendt, a Jewish exile from Berlin who fled the Holocaust, wrestling with her realization that Eichmann, who organized major portions of the Holocaust, wasn't a psychopath but a completely mundane, thoughtless, career-focused bureaucrat who was trying to rise in government, believed in doing what you are told, and then organized one of the most evil acts in human history without ever reflecting on what he was doing.


It’s like saying that the people working in the slaughter houses are actually kind folk who do like animals and care deeply for their well being. That can be absolutely true, but they still work for a slaughter house. Your care and trust doesn’t matter a bit because the fundamental nature of the organization is that it profits from cruelty. I understand it pays well, and that maybe they are trying to be nice and all, but yeah there’s only so much purity of heart you can insist while still working for the slaughter house that is Facebook.


I actually think this analogy is the very opposite of what you may be trying to explain.

A lot of people that work at slaughterhouses do so because they have no other choice. It is the best opportunity that's afforded to them. It is a job that causes trauma for many, often has long, grueling hours, and doesn't pay well.

Working at Facebook couldn't be further from that situation. Never mind the obvious perks (the tech, the white collar work, the gourmet food, I hear there's also a wood shop where you can go do woodworking on your break, the half a million dollar salary, etc etc etc). But the overwhelming majority of these people have the whole world of job opportunities to choose from, if they're willing to take a pay cut from an INSANELY HIGH salary to just a VERY HIGH salary.

So in that sense, they couldn't be further away from working at a slaughterhouse. The fact is, they could quite literally work anywhere else (any other company or any other city/country with remote work now), and they choose not to. It's not desperation but the textbook case of golden handcuffs.

It's very, very difficult to say no to 500k a year. I'm not even sure I could say no if I were in that position. I'd probably tell myself "Just coast for two more years and both my kids won't have to pay for college" or something like that, and keep going.


I have said no to a Facebook offer before (I actually recommend everybody apply to Facebook, they give great offers you can use for negotiation elsewhere and it wastes their time). Like you said, we’re talking about the differences between insanely and very high here. I don’t think a 20% increase in TC is worth making the world a worse place, and I’d hope for most people that’s not a hard decision.


So a person who owns a farm with dogs as pets, chickens, and cattle (which he will slaughter at some point) is cruel? What a dumb analogy.


> So maybe Zuck is telling the truth here, that they are trying to fix all this. But no one can see the forrest from the trees.

Don't fall for words!

Frances Haugen was able to see the big picture. The documents she presented had Facebook employees mentioning it. Facebook didn't act on what was known. It is not that it wasn't known.

To paraphrase John Roberts - the only way not to do a thing is not to do that thing.


It's systems-level thinking applied to people. Reader beware though, once you start down this path, you can become adept at spotting this pattern emerge in many other human systems.


The statement that made it really clear to me was that Facebook has moderators for 50 languages... while supporting 111 different languages [1]. It's wildly irresponsible to offer services in a language you can't moderate.

And it sure seems like an intentional part of their fig-leaf denial strategy -- viz. the recent revelations about human trafficking on FB in Arabic [2]. Or armed groups in Ethiopia inciting violence on FB in ways that FB chooses not to monitor because of language issues [2].

A company with 21Q2 revenues of $28.5B can't hire moderators in languages spoken in countries with low costs of living... It reflects a thirst for growth with no thought given to the people affected by their growth.

[1] https://www.reuters.com/article/us-facebook-languages-insigh...

[2] https://www.wsj.com/articles/facebook-drug-cartels-human-tra...


That’s sort of how things are where I work. The systems are so complicated and the interactions are often algorithmic and machine learning based. We try to maintain documentation and architecture artifacts with as much accuracy as possible. But in some cases things may as well be magic because no one really understands the whole process.


Skynet is here. Who knew it would come for us with outrage-inducing memes and not metal robots?


The human element is not a variable we define in code. There are things that, by the nature of how they're used, become harmful. Intent does not matter. Good people can intend that their new free anonymous file sharing service will be amazing. Until it's used by bad actors. The concept is good, the intent is good, but in practice it doesn't work that way.

There's also another concept: the reality that people do not actually care as much as we think they do. There's a program every public school in the U.S. has where kids run at each other, at speed, knock each other to the ground with concussions, tear their muscles, break their bones, and behave terribly towards one another. Yet every school still has said program. Parents encourage their kids to join. We just don't care about what's right.


> The human element is not a variable we define in code. There are things that, by the nature of how they're used, become harmful

Agreed, but then I think we should look at guns first: invented to kill, and yet we let people buy them en masse for fun.


Not all evil looks that way to outside observers, unfortunately. I believe that the assumptions of FB that allowed it to get so big, "optimize engagement above all else", built a system that in many ways is at odds with the values of our society when everyone is a user.

Internally at FB, everything looks good: you hit all your OKRs and believe users are better off. Maybe you don't, but your bonus is huge, so you put your head down and keep on. Externally, it's an entirely different picture. Connecting people is a comically small issue for society to need FB to solve, relative to our need for them not to harm children, promote extremism, or hide research when testifying to Congress.


> The best I can come up with is that Facebook is so big that the "evil" is an emergent property of all the different things that are happening

> so while the individuals involved have good intentions with what they are working on, the sum total of all employees' intentions ends up broken.

I honestly think a huge part of it is that when you put basically the entire internet, and society, into a giant conversational feedback loop, you're bound to spin out the worst, especially if FB wasn't trying to filter it 100% of the time (which they weren't, because it's a business and the problems weren't always equally well known).


Don't you think it's more likely that he's just using these projects to garner some goodwill and sympathy?


What I'm saying is that I know people working on these projects, and they are good people who want to make things better. They wouldn't work on these things if they didn't think it made a difference, as they all have plenty of other places they could work.


It's ridiculous to think a platform that most of humanity uses can be controlled to the liking of the left, the right, the upside down, etc., because all those groups make up humanity, and we do not and never will all think exactly the same; we all have different motives and biases.

Misinformation has been around forever. Ever play the telephone game in school? You tell one person a story, they tell the next, and the next, and the next, and soon that story is no longer factual. Stir all that in with bias and things get even murkier!


I'm reminded of Terry Pratchett's image of the row of mugs (with cute little sayings) owned by the torturers of Omnia's Quisition.

This is a generally hard problem but it's as significant now as it was in the aftermath of WWII. I'd say it speaks to the reality of human subjectivity, and it never goes away: I can only wonder if the same will be true of AI, and whether it's possible for a thinking being to really internalize the concept of hard limits to their perception, and build that into their model of the world.

You could say the God concept is a way of trying to internalize the limits to perception: 'something is vastly significant and it's not me, and my understanding does not and cannot encompass it'.

With OR without this concept we as humans are exactly as evil as each other. That's the secret. There isn't a qualitative difference between 'us' and history's great monsters. It's about the choices we've made and how we've acted on them: the rest is rationalization, which we are all subject to in one way or another.

Grappling with this is the Nuremberg moment: the question is 'never mind whether you feel you've been good, what have you done?'

So, what have they done?


The complexity of the system is too great. It's similar to how the economy runs: there are many well-intentioned, intelligent (even brilliant) people who study and focus on it, but the problem is so complex that no one can fully understand all the components. Not to mention the number of people at Facebook, and in the economy, who are intentional bad actors.

I'm not saying they are blameless; I just always have a tough time laying all the blame on a couple of people.


I have a friend who started work there in the last three years.

It’s so big and so organized, they can come up with an idea for a new service or policy they want to implement and it takes roughly two weeks to get all the channels to approve and move forward on the idea. Implementation is different, this is just getting all the approvals from legal, finance, marketing, etc..

They are definitely in a position to make changes quickly should they need to.


Sometimes it is not the virtue of people in the organization, it is a function of the structure and incentives of the organization, the "emergent property" that you reference.

Imagine if a company had invented methamphetamine, but the ill effects weren't as readily apparent. Then they built an empire on the belief that the societal benefits of millions of people running around in a seemingly ultra-productive manic state were a godsend to society, and that they had truly changed the world. Then realize that the effects of Facebook are worse than that--it has the opposite effect on productivity, has maybe worse mental health effects, and is nevertheless highly addictive. The reality would never sink in inside that bubble. Worse, the tens of thousands of people whose jobs and wealth depend on tuning said meth to be as addictive as possible are...what? Pawns? Believers? Accomplices? Delusional? Regular people. They are regular people.


It's been said that this psychopathic behaviour is an emergent property of many corporations, arising from the nature of their very legal structure. In other words, the people may be fine, but the outcomes can turn out not to be. See...

https://en.wikipedia.org/wiki/The_Corporation_%282003_film%2...

"The Corporation attempts to compare the way corporations are systematically compelled to behave with what it claims are the DSM-IV's symptoms of psychopathy, e.g., the callous disregard for the feelings of other people, the incapacity to maintain human relationships, the reckless disregard for the safety of others, the deceitfulness (continual lying to deceive for profit), the incapacity to experience guilt, and the failure to conform to social norms and respect the law"


They're in a tough spot by design. Much of Facebook is private. How can they possibly be transparent enough to satisfy critics about what actions they take? Share too much and another Cambridge Analytica situation pops up. Share too little and researchers decry coverup over lack of access.


The problem with Facebook is that it plays with fire every day. It kills innocent people every day. But they have a fire department, so they can't be all bad, right?

If you can't do what you do in a way that isn't this harmful to the world, then you always have the choice to just stop it.

These are all smart people. They could be working on anything else and be successful at it. But they are scared of change. The money is too good.

I just wish the decent people working at Facebook would leave and go work somewhere else.

And I wish we would stop debating whether Zuckerberg is redeemable. He is not. He is a psycho. He is why it has escalated this much. He is a monster. Beyond all the lies, he intends the damage he does to the world. Maybe someone bullied him as a child. Maybe he is just not well. I don't know.


It’s easy for me to reconcile after living through COVID. There are people in my own family who have emotionally told me they’d never do anything to hurt their family. Meanwhile throughout the pandemic they have purposefully hidden when they have been sick and spent full days with their elderly family and immune compromised 3 year old, touching food and participating in cooking. There is a big difference between emotional and cognitive empathy.

I also think the people who make the biggest show of how much they care tend to be the same who don’t actually act in a caring way at all.

No, I’m not surprised at all that FB employees say they really care. And that they do so very convincingly.


Couldn't it be possible that the people you know try hard but are limited in what they can do because of policy and decisions that come from above? Stuff like hiding research that looks bad isn't something that a dev or even a manager decides.


I can care about many things. Eating healthy, getting exercise, reading.

My actions can show otherwise, i.e., I'm going to go eat a slice of cake, sit down, and watch TV tonight.

My priorities and actions don't appear to be in line, and the result I'm going for won't be met.


> But no one can see the forrest from the trees.

Who is bold enough to speak for "everyone"? You are definitely not speaking for me. I personally get a lot of value from Facebook. I have never had any problem with it in any respect. I use it to communicate with my family around the world. I've used it to rent my apartment and sell things on Marketplace. I keep in touch with people I know. And I have very thoughtful and enlightening political discussions that help me make the right choice about who to vote for and stay informed. (The only other place with better discussions is Hacker News, though.)


There is only one way to fix this, prevent anyone from influencing what is shown more prominently to users. The simplest solution for that would be simply chronological order only from your friends.
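In code terms, "chronological, friends-only" is trivial compared to engagement ranking: a filter plus a sort, with no behavioral signals at all. A minimal sketch (the field names here are invented for illustration, not Facebook's actual data model):

```python
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    timestamp: int  # Unix epoch seconds
    text: str


def friends_only_chronological(posts, friends):
    """Keep only posts authored by friends, newest first.

    No engagement signals, no recommendations -- nothing for
    anyone to tune or game.
    """
    return sorted(
        (p for p in posts if p.author in friends),
        key=lambda p: p.timestamp,
        reverse=True,
    )


posts = [
    Post("alice", 100, "hi"),
    Post("brand_page", 300, "sponsored content"),
    Post("bob", 200, "lunch photos"),
]
feed = friends_only_chronological(posts, friends={"alice", "bob"})
print([p.author for p in feed])  # → ['bob', 'alice']
```

The brand page is dropped and the remaining posts appear strictly by recency; there is no prominence knob left to influence.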


It took me about an hour of clicking but I seem to have a FB feed that is just my friends in chronological order again:

https://www.facebook.com/?sk=h_chr

I had to go into my groups settings and unfollow them all individually though:

https://www.facebook.com/groups/feed/

Same for pages:

https://www.facebook.com/pages/?category=liked&ref=bookmarks


"The best I can come up with is that Facebook is so big that the "evil" is an emergent property of all the different things that are happening."

I half agree. I do in fact think it's been baked in from the get-go; it's just that there was a period where it was not an obvious pillar. You could in fact do all kinds of essentially innocuous things and accept some surveillance capitalism with awareness of your own liabilities.

It's now become so much larger and more problematic that the "emergent property" is that every move adds weight to the need for the firm's dismemberment into smaller units, or punishing regulatory limits. And I mean truly brutal, bone-breaking regulations.

"So maybe Zuck is telling the truth here, that they are trying to fix all this."

Nope. He knows that if he wants to stay on top, he has to keep doing more of what he's done; his only other choice is to actually adapt, which he will not do.


Shouldn't it then be possible to account for and correct the emergent evil? That's the point of government regulation, is it not? Maybe then an appropriate, self-critical response from Facebook would be, "Yeah, our system is broken. How can we help?" instead of immediately going on the defensive. If they claim to care about the bigger picture, they need to acknowledge it without excuses.


Good people work for oil companies and car companies.


At this point in the climate disaster those are getting very rare, especially in oil. Maybe the exceptions, if they exist, are those working hardest at winding down oil production and enacting cap and trade.


If a company desired to be able to sow doubt if its impacts on society ever came under a microscope... one gambit (and an effective one, based on your reaction) would be to hire people who genuinely and passionately research and work on trust and safety, then systematically under-resource their teams and gaslight them into thinking there are fundamental reasons their recommendations must be ignored.

For instance, contrast Zuckerberg's statement here:

> And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?

With such severe under-resourcing and deprioritization that one person had the literal weight of worldwide election integrity on her shoulders, as revealed more than a year ago: https://www.buzzfeednews.com/article/craigsilverman/facebook...

> The memo is a damning account of Facebook’s failures. It’s the story of Facebook abdicating responsibility for malign activities on its platform that could affect the political fate of nations outside the United States or Western Europe. It's also the story of a junior employee wielding extraordinary moderation powers that affected millions of people without any real institutional support, and the personal torment that followed.

Haugen echoes the same in https://archive.is/tQwE9 :

> She soon grew skeptical that her team could make an impact, she said. Her team had few resources, she said, and she felt the company put growth and user engagement ahead of what it knew through its own research about its platforms’ ill effects.

The fact of the matter is that if Zuckerberg were to say "I'm going to pour our profits into trust and safety and abuse avoidance in order to ensure that our position as a trusted brand is sustainable for generations to come," his high levels of voting control and clear defense to any allegations that this was against long-term shareholder interest would fully make that possible. The fact that quite the opposite has happened should be considered with much more weight than his words in a reactive press statement.


There is a German saying: "Der Fisch stinkt vom Kopf her" (the fish rots from the head down). 90% of all employees may very well have the best intentions, but this doesn't mean anything if the decision makers don't.

A company is not a democracy.

Indeed, we have probably just seen one of the (former) employees with good intentions struggle to stay true to them.


Facebook just needs to look at itself from a complex interacting systems point of view.

It is the structural incentives in the system that cause the problems.

So yes, evil and discrimination can be an emergent problem even though no individual intends harm.

Then you might also have bad actors who exploit those structural incentives/weaknesses.


> So maybe Zuck is telling the truth here, that they are trying to fix all this.

Maybe they are trying, but also maybe they are trying to have their cake and eat it too.

What I mean is that very likely the proper way to fix things would financially hurt FB, which seems to be something they really don't want to do.


I mean, that's what it might be. This might be the "banality of evil", an emergent property of social networks themselves. If this is the case, then we have a harder question ahead of us as an entire world: how do we fix the problems of Pandora's box?


So if it is a form of modified Hanlon's razor -- "never attribute to malice what can be explained by lack of capability", particularly because they are too big -- it sounds like the answer is to break them up so they aren't too big. Is that the solution?


Aaron Swartz wrote a great book review of Moral Mazes[1] which touches on this.

http://www.aaronsw.com/weblog/bizethics


So, anecdotally, the most amoral programmers that I've ever worked with have ended up at Facebook. I'm sure there are decent people there, but I couldn't personally work there in good conscience.


> So maybe Zuck is telling the truth here, that they are trying to fix all this. But no one can see the forrest from the trees

Yeah, maybe... they _are_ cashing in billions in the meantime.


I don't think the existence of good people, or what anyone's intentions are, matters at all. I doubt anyone can change the course of Facebook. The stock price runs the show.


Perhaps we should help Zuck a little, and ban advertising in any social media application. That should at least set the incentives straight.


The disconnect is that Facebook is coming at this with the assumption that it is right and proper for Facebook to exist. The rest of us don’t make that assumption. So “how can Facebook best serve kids” might be “withdraw from routing tables permanently” but that isn’t on the whiteboard in Zuck’s office.


All I can suggest is look at the actual data and not the reporting on the data.


FB seems guilty only because their internal findings were leaked.

I have no empathy for them. They bring out the worst in Humanity. They build walled silos of festering hate and anger, all driven by "user engagement", "hours on site" and money.


I worked at FB and struggle with exactly this.


It's easy to reconcile.

Just remember that rationality is bounded, i.e., there are problems chimps with six-inch brains can't solve. It's the classic Jurassic Park story, where man says he can control anything, and then realizes he can't, by which time it's too late.

This is why the road to hell is paved by "good people who have kids" with their good intentions.

FB's issues did not appear yesterday.

Like the endless war, the issues were there right from the start. So why are we talking about them today? Because lots of good people didn't do anything, not because they aren't good or skilled, but because the problem is too complex for them.

This is where Bounded Rationality helps resolve issues. If the problem is too complex, pick a simpler problem.

This is hard for some chimps to do for various reasons, so entertaining them is a recipe for disaster. Their narrative will always be: "People are good. People experienced World War I. They know what's at stake. They lost family, friends, body parts. Many are great heroes. Trust them. They know what they are doing." And still we got World War II.

Why? Because rationality, skill, and experience don't matter for some problems. All the "good Germans", from politicians to religious leaders to military and intelligence leaders, knew Hitler had to go long before any notion of war entered their minds. Every coup and assassination they plotted, they second-guessed themselves. All of them ended up dead.


Another "statement", one not intended for the public

https://www.buzzfeednews.com/article/ryanmac/growth-at-any-c...

(Note: "Connecting people" here does not mean providing communications services, it means using behind-the-scenes, unconsented, and sometimes deceptive tactics to figure out whether and how people are connected to each other IRL.)

Andrew Bosworth June 18, 2016

The Ugly

We talk about the good and the bad of our work often. I want to talk about the ugly.

We connect people.

That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide.

So we connect more people

That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.

And still we connect people.

The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.

That isn't something we are doing for ourselves. Or for our stock price (ha!). It is literally just what we do. We connect people. Period.

That's why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.

The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don't win. The ones everyone use win.

I know a lot of people don't want to hear this. Most of us have the luxury of working in the warm glow of building products consumers love. But make no mistake, growth tactics are how we got here. If you joined the company because it is doing great work, that's why we get to do that great work. We do have great products but we still wouldn't be half our size without pushing the envelope on growth. Nothing makes Facebook as valuable as having your friends on it, and no product decisions have gotten as many friends on as the ones made in growth. Not photo tagging. Not news feed. Not messenger. Nothing.

In almost all of our work, we have to answer hard questions about what we believe. We have to justify the metrics and make sure they aren't losing out on a bigger picture. But connecting people. That's our imperative. Because that's what we do. We connect people.


And he's recently been promoted to CTO.

When this statement leaked, he made the bullshit claim that he was playing devil's advocate. He certainly wasn't. This post was made at the same time as another leaked one about Messenger adding a deceptive interstitial to get people to agree to share their number and contacts with FB.


The hard irony is that Facebook is just another mechanism to fragment people. It is no different than these other "borders, languages, and increasingly by different products".

It seems that the author is operating under the assumption that if everyone is inside of their product, the world won't be fragmented anymore. People will be connected.

Yes. They will be connected. To the product.

We can do better than this dreary future. It is possible to connect people as peers, without the exploiting hands of intermediaries like the executive who wrote this statement.


i want to downvote you so badly


Zuckerberg is making an almost entirely emotional appeal in his statement. Most of his claims are not backed up / buttressed with facts, numbers, and specifics. The statement is designed to make a reader feel bad for Facebook as if Facebook was a friend, and not a corporation with billions of dollars in quarterly profits.

Though the statement seems well-meaning, it is weaselly and manipulative. It also conveniently doesn't address some of the deeper issues from Frances Haugen's testimony.

For example, Haugen focused on the fact that Zuckerberg has created a relatively flat organization, where if decisions help the core metric they must be good, and vice versa. Haugen testified that Zuckerberg was made aware that instituting a newsfeed tweak would entail a) a small ding to the core engagement metrics and b) a decrease in violence in Ethiopia. He chose the metric over the decreased violence.

There comes a point where blindly pursuing metrics -- be it money or engagement -- without regard to the effects on society is hard if not impossible to distinguish from sociopathic behavior.

Also, let's not forget that researchers and renowned statisticians employed or sponsored by Big Tobacco (e.g., R.A. Fisher) convinced themselves that smoking didn't cause cancer. [0]

[0] https://pubmed.ncbi.nlm.nih.gov/2000852/


How so? He asserted several apparently factual claims that would basically undermine or make irrelevant most of the commentary in this thread, for example:

- Social media can't cause "polarization" because the measurements of that are going down in most of the world, except the USA. But social media is heavily used everywhere.

- It makes no sense to claim an organization doesn't care about X when it heavily funds research into X.

- If you react to a company researching the harms of its products by leaking everything and publicly accusing the company of being evil, other companies will simply not do research into the harms of its own products.

The second two are just logic. The first would benefit from a citation but I'll take his word for it.


Ah, great question. The pattern I'm pointing to is a little subtle. I think at this point it pays to be extremely sceptical of Zuckerberg.

(Quoting from Zuckerberg's original post)

> "And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?"

This certainly needs a citation. Ethnic conflict in Ethiopia, division in Britain around Brexit, and US polarization would seem to be obvious counter examples. It should certainly be said that correlation is not causation. Also, note that the claim that 'social media doesn't cause polarization anywhere / everywhere in the world' is a subtle bait and switch from "Facebook causes polarization in certain areas" or "Facebook's lack of robust, well-staffed safety mechanisms allow it to be exploited to cause polarization in certain areas."

> "If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place? If we didn't care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space -- even ones larger than us?"

There is more than one way to care about a research program, but the absolute amount of budget spent on X is not the same thing as relative budget priority. For a company that made 54 billion in profit last quarter, it'd be more surprising if they had no research program. Zuckerberg does not present any specifics here -- what percentage of gross revenue is the research program? How many people are employed to fight harmful content, and how does this compare to how many are employed to encourage growth? And what's the point of research if the results are not acted upon? The whistleblower was pretty clear that the research is disregarded if suggested fixes would cause even a <1-percentage-point hit to the core engagement metrics. Does Zuckerberg have any specific facts about how many times civic integrity / safety suggestions were prioritized over the core metrics, other than the one example he cites (Meaningful Social Interactions)?

Speaking of Meaningful Social Interactions (MSI), the whistleblower specifically said that there is a foundational problem with how MSI is defined, because it includes the number of comments a post receives. Even without intending it, it is easy to see that controversial posts will attract more attention. Zuckerberg cites no evidence about the relative percentage of comments that are angry vs other emotions, and how this has changed.
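To make that mechanism concrete: if an engagement score counts comments without regard to their sentiment, a post that provokes an angry reply storm can outrank a well-liked one, with no one intending that outcome. A toy illustration -- the weights and numbers here are invented for the sake of argument, not Facebook's actual MSI formula:

```python
def engagement_score(likes: int, comments: int, shares: int) -> int:
    """Hypothetical engagement-based ranking score.

    Comments are weighted most heavily (a "meaningful interaction"),
    but the formula never asks WHY people are commenting.
    """
    return 1 * likes + 5 * comments + 3 * shares


# A pleasant post: widely liked, lightly discussed.
pleasant = engagement_score(likes=500, comments=10, shares=20)

# A controversial post: fewer likes, but an angry comment storm.
controversial = engagement_score(likes=100, comments=200, shares=30)

print(pleasant, controversial)  # → 610 1190
```

Under these assumed weights the controversial post nearly doubles the pleasant one's score, so a ranker optimizing this number would surface it more, even though outrage and quality are indistinguishable to the metric.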

> "That said, I'm worried about the incentives that are being set here. We have an industry-leading research program so that we can identify important issues and work on them. It's disheartening to see that work taken out of context and used to construct a false narrative that we don't care. If we attack organizations making an effort to study their impact on the world, we're effectively sending the message that it's safer not to look at all, in case you find something that could be held against you."

Zuckerberg is complaining about incentive problems? The whistleblower has said that Facebook's very policies make it "not care" even if individuals do. This is also what comes across in the WSJ articles. In other words: the narrative isn't false, and this has been documented. His point about the specific incentive problem of leaked research is interesting, but it's a case of an abstract concern (for other company's research) vs. very real and well documented harm Facebook is a) doing now, and b) per the whistleblower, is unequipped to solve alone.

Also, at one point in Zuckerberg's missive, he shifts the locus of responsibility from Facebook to Congress: "..at some level the right body to assess tradeoffs between social equities is our democratically elected Congress. For example, what is the right age for teens to be able to use internet services?" Deciding what the "right" age is can take several forms. A panel of seasoned jurists, child psychologists, policy experts, etc., can spend a long time debating what the "right" age is in the universal sense. Or, Facebook could take a stand, err on the side of caution, say that 17 is a better age than 13, and detail why they think so.


I'm British. I don't think anyone in the UK has tried to argue that disagreements over Brexit are caused by Facebook. Actually the whole idea would sound kind of absurd. People disagreed over Brexit because:

1. Some people disagree fundamentally over the nature of government and how power should function.

2. Some people were afraid of various kinds of "punishment" or instability that they were told leaving would cause, even if they would have supported it in the abstract.

Neither of these has anything to do with messaging apps or social media. As for ethnic conflict in Ethiopia -- ?!?! -- seriously? That part of Africa has been a hotbed of bloody tribal conflict for my entire life. It's driven by the local culture; I seriously doubt anyone there gives the tiniest shit what people post on Facebook.

This is Mark's point. It's not a bait and switch to point out fundamental inconsistencies between other people's theories and the wider world. The idea that Facebook is some unique social evil that causes people to disagree just looks very odd from outside the USA, looking in. It's being made a scapegoat for US social problems. Everywhere else when people fight, they are well aware what they're fighting about and why.

Re: research. You seem to be arguing that yes, they spend a lot of money on this issue but it's not enough, whilst also admitting you don't know how much they spend. You're just convinced it's too low. But this is meaningless: research programs have natural costs and you can't simply double a budget and get ... what? Conclusions that are twice as "good"? Same conclusions twice as fast? It doesn't work like that.

Nor is research guaranteed to result in actionable outcomes. Look at their conclusions around Instagram. Some teenage girls said it made them feel worse, but more said it made them feel better. What's the actionable outcome here? Unless there was an incredibly specific kind of thing the girls who felt worse were seeing, there probably isn't any plausible action, and if there was some sort of specific content that made people feel bad, removing it would just be used as further proof of their guilt: they're manipulating the feed to increase engagement!

The rest of this thread is all like that. You start with a take that is itself controversial and extreme, like "people talking about controversial topics is inherently bad and Facebook should suppress it". Then when Facebook pushes back and points out that actually, lots of people like talking to each other, including about politics, it is cast as the villain.

This has all the trappings of a purity spiral. No matter how much effort Facebook makes, it's never considered to be enough. Activists who aren't quite sure what they're trying to fight or why, insist on ever more moderation in the hope that somehow this will cause other Americans to all start agreeing with them. The result is stuff like XCheck, an unstable downward spiral in which ever more aggressive moderation policies force ever more people to be exempted from them, lest the incoherency becomes too obvious.


Thanks for your comment. You make some good points. Zuckerberg's comment about the incentives of leaking research is certainly worthy of consideration. And while I don't have first hand experience with Brexit, I do not mean to claim that the disagreements were caused by FB. Only that FB may have had a role in causing people to become more entrenched in their positions.

One of the points I'm making is that Zuckerberg's statement lacks specifics in the form of numbers and data. I think it'd be interesting to read a point-by-point rhetorical analysis of his statement.

Also, because of this, yes, I don't know how much Facebook spends on research. I agree that money and research quality are quite likely correlated, but it's very hard to say by how much. That being said, I care a whole lot more about the values of the company. Haugen's testimony paints a textbook picture of a values problem: she has repeatedly said, under oath, that Facebook understaffs its security and safety teams, that it turned off safety and integrity protections after the election, and more.

It's also true that civic divisions in the US -- not to mention other social problems -- run much deeper than Facebook. One mechanism people like me are concerned about is how users are recommended accounts to follow, or content, in ways that either deepen division or lead them toward a more extreme version of their views. In her testimony, Haugen gave the example of how indicating an interest in healthier eating on IG can lead to recommendations of anorexia / eating disorder content. Saying that Facebook's engagement-based ranking has nothing to do with promoting civic divisions seems to me like saying that the YouTube recommendation algorithm a few years back had nothing to do with the rise of the modern flat earth movement. Researchers have evidence that it did [0].

As for ethnic conflict in Ethiopia, I only bring it up because of Haugen's testimony. As this Guardian article puts it, "Haugen warned that Facebook was 'literally fanning ethnic violence' in places such as Ethiopia because it was not policing its service adequately outside the US." [1]. Your comment does make me wonder how many people in Ethiopia have access to the internet though.

This is a slight tangent, but it's also worth mentioning, re: IG and mental health, that we don't know about other research, such as any further attempts at a causal study -- most of what's been cited is correlational and comes from small-sample interviews. So it would be nice to see larger and more rigorous studies. I don't believe that research should stop with the question "Is Instagram harmful?" Of course that's going to have a mixed answer when dealing with large masses of people. "Who is susceptible to being harmed?", "By what mechanisms is IG harmful to some people?" etc. are questions that need answers.

I also disagree that people are so biased against FB/IG that anything they do will be seen in a bad light. Were they to tweak the IG recommendation algorithm so that an interest in healthier eating did not lead to anorexia content, people like myself would applaud. And though I am not an activist, I'm generally interested in (the enabling of) wholesome discussions and interactions, i.e. things that promote a feeling of being in a community / society rather than feeling apart from it.

[0] https://www.theguardian.com/science/2019/feb/17/study-blames... [1] https://www.theguardian.com/technology/2021/oct/07/facebooks...


I think part of the disagreement here is that you see a whistleblower, but I see an activist. One whom, frankly, if I were Zuck, I would have fired or simply never hired in the first place.

Arguing that Facebook causes tribal conflict in Ethiopia by not "policing aggressively enough" or "understaffing" teams is not, to me, the argument of a whistleblower. It's the argument of someone who has totally lost perspective, of a totalitarian who believes that any and all of humanity's ills can be fixed by manipulating communication platforms. It's no different to saying "if the phone company cuts off any phone call in which people are arguing, there will be no more arguments and everyone will be happy". When phrased in terms of slightly older-gen tech, it is obviously absurd.

"Were they to tweak the IG recommendation algorithm so that an interest in healthier eating did not lead to anorexia content, people like myself would applaud"

Good on you for being consistent then! Sadly it seems to be very rare. Look at Zuck's post. He points out that Facebook did in fact make changes to prioritize stories from friends and family, even though that reduced their income and reduced the amount people used the site i.e. a lot of users were actually people who don't care much about their cousin's cat pictures, but do care a lot about civics, or phrased another way, "divisive politics".

Yet it doesn't seem to have done them any good. For people like Haugen and a depressing number of HN posters it's not enough to re-rank nice safe family stories about new babies. For them Facebook also has to solve teenage depression, war in Africa and probably world hunger whilst they're at it. And if they aren't it's because they're "under-staffing" or refusing to "adequately police" things.


My perception is that people aren't expecting Facebook to solve teenage depression, but to stop contributing to it if it is. FB's research has been criticized by scientists as being of poor quality [0], and Zuckerberg claims the findings were cherry-picked. This is actually good news for FB if true. Should they partner with neutral, third-party university research teams, as well as commit to a transparent investigation, they'll be able to clear things up. Not everyone would agree, but I believe that many people are capable of changing their minds when presented with new evidence.

The metaphor of a phone company cutting off an argument is an interesting one. I agree that people arguing is a fact of society / nature, and I also agree that cutting off a phone call seems like an absurd way to try and solve the larger problem. But at the same time, I don't think the metaphor fully applies, for the following reasons:

First, a phone call is one-to-one communication, and Facebook is one-to-many. It's rare if not unheard of for strangers to call each other to say what they think about, for example, a NYT article. Second, a phone call has no recommendation system pushing "engaging" subjects, where engagement can be defined in terms of how controversial a subject is. Third, only 9% of FB users speak English, and Haugen testified that the safety features, tweaks to the ranking algorithm, and tooling are not as good (potentially drastically worse?) in non-English languages.

Most people would argue that phone companies have some responsibility to prevent spam calls, similar to how email services prevent or flag spam emails. These are network-level actions, and a lot of Haugen's testimony was about how FB was being irresponsible in this regard.

[0] https://unherd.com/2021/09/facebooks-bad-science/


"Ethiopia violence: Facebook to blame, says runner Gebrselassie" This is the headline from BBC in 2019. It makes me so angry and upset. If facebook was run ethically how much smaller would it really be? 10%? 20%? I can't help think that although they would lose some customers they would also gain others.


Roger McNamee warned us.



