> I presume the same was true about the senior execs. They were aware Twitter was causing harm to people. If they wanted to know the details, we had plenty of research and they could have ordered more. Did they care? Impossible to know. But what they focused on was growth and revenue. Abuse was a big deal internally only as long as it was a big deal in the press.
Could this just be an issue of too many problems to care about and not enough time to solve them all, or do you think the indifference was intentional?
I worked at FB (not for very long), but you can trace everything back to the awful performance process they have (the infamous PSC). At the end of the day, hard-to-measure stuff doesn't get you promoted, while tangibly moving metrics does. If you incentivize people that way, it doesn't take anyone WILLINGLY doing anything evil to end up with a pretty evil thing on your hands.
If you optimize for profits and only profits, you always end up selling crack, because it's the best business in the world, and that's why it's illegal.
I agree their incentive structure is at the root of this. But this is an incentive structure designed by one group of conscious actors and then followed by another group. A bunch of people choose this. And given the many years of public critique of Facebook, they can hardly be unaware of what they're choosing.
The truth is that almost anybody could sell crack. Most of us choose not to.
Too many problems to care about and not enough time? That's the human condition. What defines us is the choices we make, the priorities they set.
I can't know what they felt when they made those choices. But I can see the choices and the outcomes. I get there's some theoretical difference between willfully fucking people over to get rich and being so blinded by eagerness to get rich that you fuck people over as a side effect. But either way they worked very hard to get positions of power that affected millions and then were indifferent to the harm they caused, so it's not like this happened by accident.
Abuse is measurable in all sorts of ways. The clearest one is having experts look at a random sample of users and see whether they're being abused. You can back that up with interviews to capture both their take on what's happening and a variety of trauma markers. And there are all sorts of other measures that correlate.
But if there somehow weren't ways to measure it? Then they would have created a product where they couldn't even tell that they were harming people. That right there is something that shouldn't exist.
Yep, ask yourself how much identifiable return "preventing abuse" has, and then you have your answer for exactly how much these companies actually care about it.
Even worse, preventing abuse and other social media ills often lessens engagement, and you know what that means.