Our world-destroying-AI paperclip maximizer is here. It's called a "news feed".
The thing that gets the most clicks is outrage, as the AI has discovered. We're setting people against each other more and more efficiently.
The result has been clear since the Arab Spring. Good things don't come from helping people hate each other in the most efficient way possible.
Banning AI-driven-click-maximizing news feeds would be a healthy start. Right now, they're doing serious damage to our world.
I've used YouTube for many hours per day for several years. I've almost never seen a single thing appear on my home page that was polarizing or even clickbaity. The very rare times I do (always after I watch something that's kind of adjacent), I just click "Not interested" and I never see it or anything like it again. It's done a pretty good job of predicting what I would and wouldn't be interested in.
Same with Twitter. I just unfollow anyone I find tweeting polarizing or charged things. My Twitter feed looks pretty close to the HN front page.
Many people crave these things, whether or not they realize it. I think it's going to be this way for a very long time.
I'll watch that film, though.
And IME, it doesn't even have to be videos watched by most people: a subset of "power users" who binge certain kinds of videos is overrepresented in the recommendation engine.
For me it takes a single round of a few "Not interested" clicks. Definitely not weeks, or even days.
For a self-contained example, spam in large Facebook groups always rises to the top, because many people comment asking for it to be deleted, causing the algorithm to show it to more people, some of whom comment, until a moderator finally kills the post.
These kinds of side effects do not happen in a voting based system or a straight chronological feed.
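The side effect above is easy to make concrete with a toy model (hypothetical scoring rules and numbers; real feed algorithms are far more complex): if every comment counts toward a post's score, including "mods, please delete this" complaints, the spam outranks ordinary posts, while a plain chronological sort is immune.

```python
from datetime import datetime

# Hypothetical posts in a group: the spam post has mostly complaint comments.
posts = [
    {"id": "vacation_photos", "posted": datetime(2021, 1, 1, 9),  "comments": 4},
    {"id": "spam_link",       "posted": datetime(2021, 1, 1, 8),  "comments": 30},
    {"id": "group_question",  "posted": datetime(2021, 1, 1, 10), "comments": 7},
]

# Engagement ranking: every comment boosts the post, regardless of sentiment.
by_engagement = sorted(posts, key=lambda p: p["comments"], reverse=True)

# Chronological feed: newest first; complaining about spam doesn't promote it.
by_time = sorted(posts, key=lambda p: p["posted"], reverse=True)

print([p["id"] for p in by_engagement])  # spam_link ranks first
print([p["id"] for p in by_time])        # group_question ranks first
```

The fix isn't that the engagement sort is buggy; it's that "comments" is a signal blind to why people are commenting.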
No, it's the content that other people engage with. Disregarding the whole "engagement is a good metric of how much you want to see something" bullshit, if you served me food based on what other people like to eat, it would be a weird mix of gluten free vegan stuff, and also coke, pizza and doritos.
I don't want any of that shit. I'm not "many people". I don't need to be fed the same irrelevant garbage as them. But the only way to achieve that is to unfollow everyone and not get many useful updates. Which is what I'm doing, but it's just barely useful, and the popular crap still seeps in at every opportunity.
I do want to see what other videos are watched by large numbers of people who've watched things I've already watched. That's how I discover new and interesting things. I've found tons of great content that way, and pretty much no clickbaity or LCD / popular crap seeps in (maybe once every few months, but the "Not interested" immediately takes care of it). I don't know how common my experience is.
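That kind of discovery can be sketched as simple item-item co-occurrence (an illustrative toy with made-up watch histories; YouTube's real recommender is a large learned model): recommend the videos most often watched by other people who watched what you watched.

```python
from collections import Counter

# Hypothetical watch histories: user -> set of videos watched.
histories = {
    "alice": {"woodworking", "machining", "blacksmithing"},
    "bob":   {"woodworking", "machining", "cooking"},
    "carol": {"woodworking", "clickbait_drama"},
    "erin":  {"woodworking", "blacksmithing"},
}

def recommend(my_history, histories, k=2):
    """Score unseen videos by how often they co-occur with mine."""
    scores = Counter()
    for user_videos in histories.values():
        if user_videos & my_history:              # this user shares taste with me
            for video in user_videos - my_history:
                scores[video] += 1
    return [video for video, _ in scores.most_common(k)]

# Top picks for someone who has only watched "woodworking".
print(recommend({"woodworking"}, histories))  # order of ties may vary
```

Note that "clickbait_drama" only surfaces if many of your taste-neighbors watch it, which matches the experience that one "Not interested" click on the adjacent video keeps it out.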
"This talk is about metrics and measurement: about how metrics affect the structure of organizations and societies, how they change the economy, how we’re doing them wrong, and how we could do them right."
(In Lonnie Athens' theory of violentization, there's a step where the person who will become radically violent says to themselves, "I'll become the biggest badass and this will never happen to me again." Milošević has a famous TV speech where, after a spot of "spontaneous violence", he tells ethnic Serbs he isn't going to let them be beaten anymore.)
These minor grievances are later amplified by demagogues. The reaction to this perceived victimisation is so out of proportion that the initial grievance looks minuscule in retrospect.
The main difference these days is that we don't need a visible demagogue. The various "engagement algorithms" play the role of an echo chamber which sends opinions into a positive feedback loop until it enters the regime of social collapse.
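The feedback loop is simple to model (an illustrative toy, not a claim about any real platform's mechanics): if each round's exposure is proportional to the engagement the content just earned, content above a break-even engagement rate grows geometrically while everything else dies off.

```python
def simulate(engagement_rate, rounds=10, exposure=100.0):
    """Each round, the feed shows the content to a multiple of last round's engagers."""
    history = []
    for _ in range(rounds):
        engaged = exposure * engagement_rate
        exposure = engaged * 3.0   # hypothetical: shown to 3x the people who engaged
        history.append(round(exposure))
    return history

print(simulate(0.2))  # 0.2 * 3 = 0.6x per round: exposure decays away
print(simulate(0.5))  # 0.5 * 3 = 1.5x per round: exposure explodes
```

With a 3x amplification factor the break-even engagement rate is 1/3, which is exactly why content engineered to provoke reactions wins: outrage reliably clears the threshold.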
You could also argue that the original rise of Fascism was enabled by mass propaganda becoming possible where it wasn't really previously, but that's so overdetermined it's hard to attribute to particular causes.
Then there's the original SARS of bad memetic propaganda: the Protocols of the Elders of Zion, a piece of leafleting from the early 1900s that's contributed to the deaths of millions.
"Just tell people that racism is bad"
Somehow this seems to make the problem worse not better, because if you're one of the outsiders you just get added to their mental list of conspirators. And if you've commented under your real name you're at risk of reprisals.
Effective debunking is hard.
Oh, and algorithmic social media makes this worse - your act of commenting on a post tends to raise its profile and cause it to be shown to more of your followers.
> "Just tell people that racism is bad"
What do you think "inciting ethnic hatred" means? Telling people that ethnic hatred is good? So that the only possible reply is that actually ethnic hatred is bad? No, it involves conspiracy theories about an ethnic group, accusations against them; in short, empirical statements that can be refuted.
There is no statement of fact so definitive that it can overcome a vast barrage of inflammatory conspiracy theory. If that's done in front of a crowd looking for a simple solution to their problems, they'll pay no attention to refutation of "short, empirical statements".
Still an improvement compared to "I heard X", which is the default with radio and books.
Obviously that doesn't always work, but social media makes it effectively impossible. Not only do you have official sources of propaganda, you live in a swamp of anecdotes that are impossible to refute or contextualize. The radio says "Group X is bad", and you yourself may never have had any trouble with X, but a flood of "X did [bad-thing] to me" stories on social media can make you think that you've got personal exposure to it.
Ultimately, I believe the problem rests with the individuals who are willing to be manipulated. The human organism has a massive security hole that has been very well exploited, and I've got no idea how to patch it. No amount of coherent, cogent argument will change the mind of somebody bent on having a target to hate, and people have gotten very good at insinuating those targets.
I don't see how you can believe these things at the same time. If these efforts to induce ignorance are indeed so well-funded and concerted, how can bottlenecks be an obstacle? Indeed why couldn't these well-funded, concerted efforts use those bottlenecks to cut off people disagreeing with them?
Of course if the money can shut you down entirely, you can't succeed. But it's hard to completely control radio and books. It can be done, but only by making your totalitarianism clear. It's more effective if people think they came to their conclusions on their own, and have been exposed to "all sides".
Today, they can bolster their organized propaganda with a sufficiently effective astroturf campaign (magnified by social media). The two provide confirmation for each other, and appear to be independent. They may even actually be independent; they don't need to formally coordinate if they roughly agree on the ends.
So people believe they're getting "all sides" of the story, and there's no need to shut down disagreement. Instead, people hear the message from two sources that affirm each other, and disregard any disagreement willingly.
also, the remedies chapter of: https://drive.google.com/file/d/0BxxylK6fR81rckQxWi1hVFFRUDg...
still looking for more in this vein.
from HN, the discussions on https://theintercept.com/2016/09/07/google-program-to-deradi... suggest attempts to de-hate might not be uniformly well-received.
Genocides and other war crimes have been a staple of humanity since Ancient Greece. Authoritarianism, of which fascism is a subset, only depends on one thing, and that is masses willing to be led.
Saying “x has always existed” is different from the scale at which x is present. That is, in fact, one way that we measure progress. Inarguably, we will never be able to stamp out our baser instincts and behavior, but we should strive to reduce their presence and impact.
If you just broaden your definition of AI a little to include the collective decision-making processes of a market-based corporate-controlled press and political system, this has been going on for decades, if not centuries. It's only recently that computers got added to the mix and inevitably bubbled to the top of the heap. Or, I should say, helped bubble some people to the top of the heap.
Maybe we should rename "personalization" to something with a negative connotation - perhaps "bubblification"? "narcissization"? "comfortzoned"?
I personally was fine curating my feed and only friending people I wanted to follow, but that's just not how it came to work socially and culturally.
Seriously though, their choices to run the platform the way they have were fundamentally shaped by profit and the stock market. The type of corporate moderation you’re suggesting doesn’t exist.
I thought they just made a modest amount of money.
However, I believe their benefit to society is significantly higher than FB's, just because of the second R (of "reduce, reuse, recycle").
I think Craigslist benefits from a transparent business model that doesn’t rely on capturing and manipulating consumer data. They charge for job ads, which is a pretty straightforward model. Contrast that with the miasmic, tailored advertising business that Facebook has created.
Craigslist isn’t necessarily leaving money on the table, they just operate a more transparent business that’s more closely aligned with the public good. It’s still not realistic to expect companies to not try to maximize their earnings potential, that’s like asking your favorite football team to score fewer touchdowns. Facebook as a business is fundamentally misaligned with the public good.
Wikipedia and Craigslist are great examples. Craigslist is for-profit, and Wikipedia was originally a project of Bomis, a "web portal" (those words meant things 20 years ago) that tried various things to be profitable and had a good amount of success with softcore pornography. Jimmy Wales was quoted as saying, "You know the press has this idea that I am a porn king. I really wasn't a king of anything, frankly, you know? Because at the time, when we looked at it, we were just like, 'Okay, well, this is what our customers will want, let's follow this.'" And even he, the guy peddling pictures of naked women on the internet because it's what his customers supposedly wanted, ended up spinning off Wikipedia as a non-profit and winding down Bomis.
When you decide to make a company that values profits above all else, it's a choice. You cannot attribute the moral impact of that decision to the invisible hand of the market - the invisible hand cannot reach out and grab things that weren't given to it.
Of course it wouldn't have sophisticated features like photo tagging and such, and probably wouldn't be Hip And Cool for Gen Z, but it could be a functional bare-bones Facebook replacement. You'd probably have to disable virality features, and maybe linking to external news sites just to prevent your racist uncle from posting Breitbart links, I don't know.
Would that really be so hard? Or do the servers, hosting, security, and moderation costs just scale exponentially after some threshold of, say, 10 million users or what have you? Supposedly Instagram was running with a very small team when Facebook acquired them.
People want to be mad. They like it.
I have two Twitter profiles. One follows only makers, tinkerers, artists and educators. I mark "do not want to see more posts like these" if anyone posts something political.
I have another one that follows people with strong political views and the latest outrage.
Guess which one makes me feel better when I view it?
Guess which one I find myself viewing more often?
Maybe you only need to have some tiny fraction on board to make it worth the effort?
Link for the curious: https://friendi.ca/
Those same people are often the ones sending misinformation and creating the problems. How do you allow people to post updates and not allow them to spread misinformation that damages society?
To prevent this from happening, it has to be actively suppressed, or at least there needs to be something slowing it down so it dies off. A hands-off attitude isn’t going to do it.
"It's why I've seen priorities of escalations shoot up when others start threatening to go to the press, and why I was informed by a leader in my organization that my civic work was not impactful under the rationale that if the problems were meaningful they would have attracted attention, became a press fire, and convinced the company to devote more attention to the space," Zhang wrote.
That is a damage control role. Perhaps more tellingly, it highlights the entire organisation's priorities: if it isn't drawing press attention, ignore it. Of course that's not the phrase FB would use in a press release. They'd deploy a convenient euphemism, such as "dedicate the resources elsewhere".
At this point in time it will take an act of God to fix it, the lock-in is very strong and their ability to buy up anything that even begins to compete with them serves to cement that lock-in to the point where I don't see a way of ever dislodging it.
There is a key difference between being able to afford to buy anything and actually being able to buy anything. The competing founders need to be willing to sell. If a competitor arises that is mission-driven, Facebook cannot buy them.
A direct threat to Facebook is unlikely to succeed. More likely is a competitor based in a niche where Facebook cannot enter for structural reasons that slowly out-competes Facebook on Facebook's turf.
One possible niche is the privacy-focused community. Facebook can't flip all of those users, ever. If something privacy-focused emerges that has low-maintenance utility for the broader world, Facebook's days are numbered.
Can anyone help me brainstorm what features would be compelling for a privacy focused social media/event planning/group discussion platform?
Facebook's network graph is its single most-valuable asset. Responsibly placing it back into the hands of its users will require some subtle care, both to avoid Facebook's ire and to avoid abuse.
Bonus points if you can do it with your spouse and children by your side.
But granted, mission-driven people are less common. To the point that many people who are not fundamentally mission-driven truly do not comprehend that some of us are. We all have our own motivations.
It's not morally sound to have an organization optimize for public image, but they are THE public image platform. Optimizing for social good would be so much better, but that's really hard to track and quantify, and it can be so divisive on some topics. Especially now that trends marked "social good" are so quickly developed and redeveloped as other things.
Even if the rational reason for being slapdash is lack of resources and lack of focus, that means the company is not investing enough on addressing the underlying problems. Things happen, and sometimes you need to jump to put out a fire - but if putting out fires is a routine mode of work, it's a symptom of a much wider problem.
I am under no illusion that trying to protect against disinformation and propaganda would be easy. As you said, the volume is so vast that it's going to require a lot of effort and immense focus. Constant firefighting shifts the efforts, and deprives focus.
Whether that's intentional or an emerging property, the end result is the same.
The degree of nuance leads me to believe it's just impossible to do at commercial/global scale (though I would agree it could be done better than it currently is).
One person : one opinion seems a fair goal.
Posting all day about something as yourself is very different than running a network of accounts.
If they are managing to do so much harm through indifference and greed just imagine what they could do if they were explicitly trying to manipulate the world for "good"?
Powerful groups who try to optimize the world have a pretty poor track record compared to those who just try to do their own stuff right and let the world sort itself out.
So let's handle moderation in... let's say Saudi Arabia, by having them silence LGBT and women's rights advocates?
And who is responsible for Saudi Arabian citizens in exile? Saudi Arabia? The country they live in? The country where they are currently on vacation?
Hence the boycott of South Africa.
There was a brief window in which Twitter was allowed to spread dissent pretty freely in the Arab world. I've no idea what that looks like now.
Many of these people lack the experience required to see the forest for the trees and they draw similar conclusions to the ones in this memo. "There's no bad intent, we're just overworked and underresourced" (paraphrased) is something I've heard time and time again from people working on supposedly important problems at companies making money hand over fist.
They do a year at Facebook, get disillusioned, and then spend 2 minutes finding another job. Not exactly heartbreaking.
Stepping back, without a media circus, how really do you expect to change anything at organizations this large and powerful? Facebook transcends governments dude, Mark Zuckerberg has an unfathomable amount of money and power. At least give her some credit for putting out a non-conformist opinion.
Also, with regards to your specific points, everyone is qualified to determine political bots are bad. I can't believe you're going with the, "Well she is missing the nuance oh and she gets paid a lot of money so there!" take here.
> Also, with regards to your specific points, everyone is qualified to determine political bots are bad. I can't believe you're going with the, "Well she is missing the nuance oh and she gets paid a lot of money so there!" take here.
The point I was making in that second paragraph is:
- Company A is wildly profitable.
- Company A says publicly and internally that solving Problem Z is very important to them.
- The team tasked with solving Problem Z at Company A is understaffed, underfunded, and lacks support from leadership.
The "forest" being missed by the people working on Problem Z is the unfortunate reality that Company A does not actually care whether Problem Z is solved or not. In fact, to the extent that solving Problem Z might hurt the bottom line they probably prefer that it remain unsolved.
I use placeholder terms because while this particular story is about Facebook, this same thing plays out all the time at darling companies in Silicon Valley (and presumably outside, but I've less experience there).
I respect the memo author immensely. In similar situations I've just left quietly. The author probably should've done just that, now their name is directly linked to this story.
But they hid the name of the software engineer that spoke on her credibility? Something seems a little off, either on the source's side or on the distributor's side.
> “Facebook projects an image of strength and competence to the outside world that can lend itself to such theories, but the reality is that many of our actions are slapdash and haphazard accidents,” she wrote.
> “We simply didn’t care enough to stop them”
This is the key takeaway, IMO. Not as an excuse for Facebook, but as an indictment of "slapdash" information technology in general, particularly social media. It's becoming more and more clear that "bringing the world closer together" is a pandora's box, one that Facebook is not equipped (motivated?) to deal with the consequences of. Maybe no company ever could be. Maybe this is simply a thing that shouldn't exist.
This is the old “don’t tell me what you care about, show me your balance sheet and I’ll tell you what you care about”.
The scale at which the platform is being used for political manipulation in every country is enormous, and if a junior data scientist is having to make these decisions independently, there's clearly little interest in proactively dealing with this.
But if random FB engineers actually can affect global political power, it is 100% certain that the intelligence services of the world are putting serious efforts into placing their own people in these positions!
"In her post, Zhang said she did not want it to go public for fear of disrupting Facebook’s efforts to prevent problems around the upcoming 2020 US presidential election, and due to concerns about her own safety. BuzzFeed News is publishing parts of her memo that are clearly in the public interest."
Anonymous sources are supposed to be used only in extreme circumstances. But these days that gets abused all the time.
The New York Times has published its rules for making a source anonymous, and they’re pretty good, IMO.
What are Facebook supposed to do? They could spend billions moderating every comment and like, but they'd piss off every politician worldwide and the users would all cry censorship (and that's if they get it perfectly correct). They could pick a side, but the same would apply with slightly fewer pissed-off people. They could do nothing, save billions, and piss off fewer people.
And in the background, a small number of people continue to manipulate everything you see in legacy media, and no one really cares because we're used to it. Seriously. What the fuck?
How about stopping the idiotic tool blaming and blaming the bad actors?
It's all in balance. Bad actors are responsible. But systems designers have a responsibility too. If you optimize a system for something (e.g., engagement), that's a reflection of your values. We're seeing the ugly consequences of Facebook's values.
If that means the business doesn't break even, maybe there should be no Facebook.
Is that really what you're asking for here?
But what’s the alternative?
If people want family & friends social media, where to go?
Aren’t the open/alternative platforms just as open to abuse, if not more so, since no one like the whistleblower is even hired when it comes to open platforms?
On the surface the outrage seems misplaced. This seems like business as usual.
Perhaps the outrage isn't misplaced if the goal is regulatory capture and entrenchment of the social media space. Imagine a world where "fact-checking" and identity verification is mandated by regulators as a prerequisite to posting online. This wave of censorship will be buoyed by a tide of righteous indignation.
That's not just a tool for authoritarian regimes, that's pretty much the most used tool in any form of political conflict, in any country.
By doing so, it will also significantly harm these business models (and valuations) as we know it today.
Social media should be much more limited by the physical world. For example, if you've never met a person at a real-life event, you can't connect with them via social media.
Throw out the news feeds. Throw out the ads altogether (because political propagandists will always find a way if you want to make the distinction).
What those grandstanding demagogue idiots want is about as relevant as all of the other non-Section-230 parts of the CDA that got thrown out by the courts.
Coordinated inauthentic behavior describes exactly what it says on the tin. They try to limit coordinated inauthentic behavior even if the content is true and even if the users are "real" people (e.g. workers who are paid to click/promote/create content).
But to make you happy, I'll qualify my statement, and say that Facebook has been A-Ok with violent and frequently genocidal extremists organizing on its platform.
1) Facebook wittingly or unwittingly ignores political manipulation on its platform within the United States of America that demonstrably affects US political outcomes.
2) All necessary parts of the US government required to hold a corporation like Facebook accountable for 1) act in concert to do so.
3) The US mainstream media extensively reports on 1) and 2).
You really have to ask yourself what kind of place you're working for and what you're building, if a totally regular employee is basically paid hush money not to speak about their job.
This isn't a private business any more, it's the mafia. People talk a lot about the culture of free speech and the rights of end users, but we live in a world where a private company that builds a social media website (this isn't the NSA or anything) can stop an employee from speaking the truth.
It's time policy makers throw all of this out of the window, together with the anti-competitive non-competes that at this point affect IIRC, almost a fifth of the American workforce.
Is this a more rare event than I thought?
What you’re describing is the status quo termination agreement that companies require you sign in exchange for a severance payout. When people don’t have first-hand knowledge of unethical and dramatically harmful behavior by the company, and often even when they do, they sign it because they want the severance. What’s unusual in this case is not only did Sophie courageously speak up internally and now publicly, she selflessly forfeited her severance payout.
Without discrediting her desire for the greater good: it is a lot easier to turn this kind of money down when you consider compensation at these big tech companies. To most people, $64k sounds like a lot; for a big tech employee, that could be a fraction of one stock vesting event. She worked there for over a year, so she saw at least one vesting event, and Facebook's stock has been explosive over the last year. She could well have netted $100k or more in a single vesting event. I'm sure she has enough money to tide her over until her next job.
That being said: California, Oregon, Washington State, Georgia, getting 4-5 months' pay as severance? That doesn't even make sense from a business perspective, imo.
If you're small enough that you're facing investors directly, everyone has experience with volatile employees. If you're larger, you don't care about a single player anyway. That's just my experience.
I've never seen it. Employment lawyers are very happy to take cases against companies, who have money. Just like non-competes, they have been dropped from contracts over the last few decades (severability applies anyway) on the west coast.
They sort of ARE the NSA
NSA CIA are part of the fabric of Silicon Valley and especially companies like
Amazon, Google, Facebook
Contrast with Apple, where, as long as Steve Jobs was CEO, Apple flat out refused to join any of the 'Patriot surveillance and stop terrorism by spying on American citizens and the entire world' coalition.
Facebook sent them specific memos about which violent images towards which politicians were not to be removed.
That particular set of facts escapes most coverage of this topic for obvious reasons.
These conversations about this topic will never be seen as sincere since they themselves are biased.
Again, not saying Facebook shouldn't be held accountable. But it's always easy from the outside looking in.
> Creepy Uncle Joe Biden is back and ready to take a hands-on approach to America's problems!
Sure enough, it's a spam site.
OK, you don't like the content. That's fine.
Did you get it in an unsolicited email, fax, text message, phone call, or other intrusive communication? If not, it is not spam.
The video is worth watching. It's silent and short, probably animated GIF files, just showing actual stuff that has been on things like C-SPAN.
I retract my comment; I was unaware of the distinct nature of BuzzFeed News from BuzzFeed proper.
Also BuzzFeed News does real journalism, despite the BuzzFeed origin of their group.
From their Award page:
BuzzFeed News is a two-time finalist for the Pulitzer Prize: for a stunning probe that proved operatives with apparent ties to Vladimir Putin have engaged in a targeted killing campaign against his perceived enemies on British and American soil, and an exposé of a dispute-settlement process used by multinational corporations to undermine domestic regulations and gut environmental laws at the expense of poorer nations.
Our reporting has also won a George Polk Award, National Magazine Award, Livingston Award, Society of Editors Press Award, National Association of Black Journalists Award, National Association of Hispanic Journalists Award, Mirror Award, GLAAD Media Award, London Press Club Award....
>BuzzFeed News is not publishing Zhang’s full memo because it contains personal information. This story includes full excerpts when possible to provide appropriate context.