This all seems to have gotten utterly out of hand and gone a bit crazy.
I stopped using Facebook when my friends started to express political views more than social content. I wanted a place to hang out with friends, not rage. People don't do that in real life. Not that much, anyway.
Somehow it seems like over the next five years or so it all descended into madness and everyone became a politician.
What went wrong?
How can Facebook even be used to 'spread a narrative' if it's just my friends? Why are my friends posting this nonsense rather than stuff about their lives?
Something just doesn't add up to me. I don't understand what other people want from a social platform.
The same seems to apply to Twitter. Everyday, normal people seem to be spending their lives raging at each other. What gives?
I want a pint now. Damn. The real world is stranger than fiction.
I think the relatively simple answer is that it turns out most people are far more subject to simple tribalism than we previously wanted to admit. Memes are incredibly powerful mind viruses and we (collectively) have done a really great job of weaponizing them over the past 30 years.
I don't think this is Facebook's "fault" in isolation since we see the same trend across most social networks, but I do think it suggests that social networks ought to be regulated, if for no other reason than that it's pretty clear that the default behaviors we're already seeing emerge en masse are not good for society or for us on an individual basis.
Sasse's new book actually looks at this. I was reading the description of it today, and it basically said that the collapse of local communities, combined with the rise of easy sharing, has essentially made political parties the new "tribes" in America. Seems like an interesting thesis, even if I don't agree with him on other things.
At least books require one to read, and maybe let information hit the frontal cortex. Memes are so basal; they can completely defame someone or prove a certain point by completely bypassing rationality. They really are far more dangerous in modern society.
They referred to books weaponized in the last century. As far as that goes, The Bible and Uncle Tom's Cabin, sure. But "Das Kapital"? Was that even widely read? How was it weaponized, e.g. did people refer to it, even though nobody had read the book?
As for a book causing change "by itself":
One thing about the Bible is that the content itself is nothing compared to the claim that it's the word of God. It could have completely different content; as long as there were people seriously devoted to it being the word of God, it would have an effect. That's why it usually involves missionaries or being raised into it. Is it even possible to encounter that book "by itself"? And it didn't even exist for the first few centuries, when the things it supposedly caused were already being done.
A book breaking a taboo, for example, and that in turn facilitating social change that was a long time coming, is not the same as a "book causing social change by itself". I don't know enough about it and the context, but it says on the Wikipedia page about Uncle Tom's Cabin:
> Uncle Tom's Cabin first appeared as a 40-week serial in The National Era, an abolitionist periodical, starting with the June 5, 1851, issue. It was originally intended as a shorter narrative that would run for only a few weeks. Stowe expanded the story significantly, however, and it was instantly popular, such that several protests were sent to the Era office when she missed an issue. Because of the story's popularity, the publisher John P. Jewett contacted Stowe about turning the serial into a book.
So at the very least, there was something there already. I'm sure it was weaponized, but how did it cause social change? Did it cause people who loved slavery to now dislike slavery? Did it make formerly happy slaves unhappy? I honestly don't know, and since my asking for examples is taken as the claim "there are no examples", I would ask you to treat my questions as actual questions.
Das Kapital was written to provide a scientific underpinning for a labour movement that already existed, long after the Communist Manifesto. So here too I'd ask: what social change did it cause by itself?
To weaponize means to make something that wasn't a weapon before into a weapon.
The Protocols of the Elders of Zion were a fabrication for a very clear and narrow purpose, it was a weapon from the get go.
I would say the same about Mein Kampf, but nobody read that. The Nazis rose to power on Hitler's speeches and violence, not on this book which nobody really read. Yes, everybody had it, but that's about it.
Did the Sayings of Chairman Mao really cause anything that wasn't already caused by the jackboots that forced people to have and quote from it?
How was The Communist Manifesto weaponized in the last century? Who read or quoted from it?
> Public pressure led to the passage of the Meat Inspection Act and the Pure Food and Drug Act; the latter established the Bureau of Chemistry (in 1930 renamed as the Food and Drug Administration). Sinclair rejected the legislation, which he considered an unjustified boon to large meat packers. The government (and taxpayers) would bear the costs of inspection, estimated at $30,000,000 annually. He complained about the public's misunderstanding of the point of his book in Cosmopolitan Magazine in October 1906 by saying, "I aimed at the public's heart, and by accident I hit it in the stomach."
So it was weaponized by large meat packers? Because other than that, I just see Roosevelt agreeing with some things -- which is hardly anything being "weaponized" -- and the public only caring about contaminated meat, at least in this WP article. Or are you referring to something not mentioned in it?
The constant raging is a consequence of the fact that outrage is much more viral than honest debate.
One of the things I like most about hacker news is the fact that its culture supports decent debate. A small fraction of this is from the thread format, but most of it is from the pre-existing culture of its users. If you want good social media, find a few sites that act as a nexus for the culture you love. Otherwise, you are lost in the sea of lowest-common-denominator media.
Really? I just see tribalism with a lot more bloviating here. The subject matter experts can occasionally be baited into posting, but I'm more accustomed to seeing the kinda-clued-in authoritatively misexplain the convenient narratives to the even-less-aware. Any attempt to tell someone obviously out of their depth to stuff it is against the rules here, so unless you already know a topic it's hard to tell when a conversation is arguing in circles.
> The same seems to apply to Twitter. Everyday, normal people seem to be spending their lives raging at each other.
"seem"... one wonders whether the picture painters and squeakiest wheels have changed our perception of everyday normal people instead of those people actually changing. A perusal of the anti-big-web-tech headlines...um...I mean op-eds every week reveals clear direction/intent. Not to say these companies are infallible, but it sure becomes harder for people to recognize the difference between real, inflicted harm and perceived harm.
I think this is the more accurate reflection. The loudest rabble-rousers on either side are the only ones you hear because everyone else is just getting on with their normal lives. As it so happens, it's profitable to rabble-rouse if you're in the business of selling clicks, so media increasingly functions only to enrage and divide instead of to inform. Obviously this isn't true for all media but it's becoming more prevalent in our common discourse.
The same happens here, on HN. 2-3 years ago there were mostly articles about programming, computing and information technology (as originally intended), but now it mostly consists of mainstream news, politics and depressing stories about the end of the world.
(It's better than half a year ago, though; now only 80% BBC and nytimes, not 100%.)
We had our nice little tech enclave. And then we discovered that, even if we didn't care about politics, politics was still being done to us - governments were making policies that we had to live under, no matter how libertarian we were. We found out that housing policy had real consequences for our lives. We found out stuff like that in non-tech area after non-tech area. And after being negatively affected by stuff that we wanted to ignore, some of us started paying attention to it - not because we wanted to, but because we felt like we had to.
a) People crave validation.
--The Internet increasingly makes it easy to give people validation that is shallow (i.e. people who don't know you saying how great your comments are, how great your life is, how great your political views are).
b) People crave emotion (good or bad).
--Due to the way these social media companies are structured, they only want user engagement; they don't really care about the quality of engagement. So they purposefully design it to be as addictive as possible, which means the shallow content that generates strong emotional responses rises to the top.
c) Politics, sports, and activities like these are very tribal, and so generate strong emotions. They also give people validation, because people can attach themselves to a movement, feel part of a community, and be validated by other members of that community. The more tribal these movements (us vs. them), the more drama, and the more excitement for the user. I would argue social media companies might actually be worse off (fiduciary-wise) by not taking advantage of these trends, because some other company surely will.
>I stopped using Facebook when my friends started to express political views more than social content.
Nearly every argument I hear that says "facebook newsfeed sucks" boils down to "the people you choose to follow share crappy shit." Unfollow them. Facebook is a tool; it works based on how YOU choose to configure it. If you follow the sites you like, instead of the sites and people you don't, Facebook does work. You can hide sources you don't like. I'm kind of rubbed the wrong way by "this thing doesn't work the way I want out of the box, but I also don't want it thinking for me."
Facebook IS to blame for creating a tool, getting people to use it, getting people to like things, and THEN changing the rules for how those likes behave (becoming a follow button).
Try creating a list, liking 100-200 quality sources, adding those sources to the list, unliking/unfollowing them, and bookmarking your list as your main Facebook bookmark.
> Nearly every argument I hear that says "facebook newsfeed sucks" boils down to "the people you choose to follow share crappy shit." Unfollow them.
Here is the failure of Facebook, right here.
Facebook says that their mission is to connect people, but they've created a platform where all the incentives are for people to get into arguments and then disconnect from each other to stop the arguments.
In general, people can disagree about things and still have productive, friendly relationships. But it's harder (not easier) to do that on Facebook than it is in real life... that's a major product failure IMO.
The failure is mixing news and photos. People subscribed to each other to see their status/away message and vacation/baby photos. Then sharing/reposting "news" to your timeline became easier. It's easier to press share than it is to create content. By virtue of "the time it takes to propagate something vs. make something," sharing became the default and, by volume, the most common behavior on Facebook.
I use Facebook to follow news sources, like an AI-driven feed reader, and I use Messenger to stay connected to people. I don't stay connected to people by watching what they post to their timeline.
In my post you replied to, I said it WAS a product failure, that they moved the goal post of what a like/wall/share was. If they had built NEW features instead of adapting old ones, this situation would be different. Instead they reappropriated the wall as the "share this article" space, and the like as the follow. Facebook f'd up by overlaying a twitter clone onto their existing product, instead of building it as an additional component/page/module/feed.
I'm not going to defend Facebook but one of the problems with media criticism of Facebook (and Google) is that Facebook (and Google) has crushed a lot of media. They have become vassals of Facebook (and Google) in order to harvest the clicks they desperately need. They have very little control of their advertising and distribution model now.
This has also driven Madison Ave. nuts. Anecdotal, but friends I have working in that space have no love lost for Facebook (and Google).
My point is there is incentive for media to criticize Facebook and we just don't seem to see this disclosure.
> My point is there is incentive for media to criticize Facebook and we just don't seem to see this disclosure.
There's also hypocrisy afoot as these news organizations publish anti-FB opinions under the guise of their news organization domains which muddies fact and opinion much more than the print days. They leverage visitor trust to inject thought.
I agree but full disclosure on how they rely on Facebook may be in order. This gives the reader a fair chance to assess if the article or authors may have a bias in how they report the story.
For instance, could Facebook begin to offer these publishers and/or authors incentive to not publish these things in the future? Either with stick or carrot. Could that influence how and what they publish?
Speaking of full disclosure, are "old media" just jealous, or do "new media" not want to be scrutinized, in ways they cannot easily lead people away from, drown out with spam, or even outright delete?
> I agree but full disclosure on how they rely on Facebook may be in order.
They dislike Facebook because it cuts into their margins. That in and of itself doesn't have to make the reporting inaccurate.
If anyone writes an article about cancer, should they also make known that they have friends who died of cancer? Maybe cancer is getting a really bad rap because everybody who writes about it doesn't like it? Wouldn't research best be done by people who don't find it particularly horrible? And if you wanted factual information about human traffickers, would you rather ask a detective on a task force dealing with them, a human trafficker, a victim, or a random person who has "no bias"?
It's not like the media always said/wrote mean things about Facebook and Google, is it? When I compare to techy people on the internet, it seems they came way late to all of the parties, and wrote a lot of gloating things, too.
Why the bias about when bias matters, when "old media" criticize "new media"? Wouldn't it make more sense to point out distortions or falsehoods if you see any, and then say "that shows this publication/author/article is biased"?
So you want them to admit bias you assume they have. Okay, and then what? You already assume it anyway, so for you nothing would even change. What's the functionality here?
> For instance, could Facebook begin to offer these publishers and/or authors incentive to not publish these things in the future? Either with stick or carrot. Could that influence how and what they publish?
So what is any particular publication supposed to write? Can you give me a concrete example of what to add to this article so that "the reader has a fair chance to assess if the article or authors may have a bias in how they report the story"?
I mean, yes, Facebook could begin to offer things to them or others, for that or other things. And yes, that could influence things. That depends on who they are, and what is offered, to do what. And anything "might" have a bias; that's just a given. I don't understand what could be added to every single article that would help here. "When we wrote X, it's actually Y in fact, but we don't like Facebook so we're not saying that"?
Option 3: I just want to see my friends and family on my feed. It's sometimes hard to tell what else is reliable or not, but I don't want Facebook curating my life.
If there were only friends and family on Facebook, there would be far fewer news articles shared. Get rid of Pages, and Facebook would be pleasant again. Of course, Facebook would also be broke.
There are a myriad of problems with this naive set of options.
1) Are you even capable of realizing what is reliable and what isn't?
2) Do you realize that Facebook and users of Facebook are manipulating what you see to encourage you to think a certain way?
Most people believe they follow option 1, when in reality we are all way more susceptible than we want to admit to misleading reports and falsities posing as real news.
It takes a lot of energy to dive into a report to tell if it's reliable and true, and in many ways it's impossible for most people to even know how to start. And if you don't even realize that misleading news exists, you are that much more susceptible to it. This is especially true today with how good we've gotten at image and video manipulation.
Facebook is trying to accomplish the literally impossible and has made the world a much worse place because of it.
To point 1, that's the wrong question. Is FB better at it than me? Maybe. Do I trust FB to do the job for me? Hard no.
To point 2, yes obviously I realize that, but they are my friends / groups I have chosen to follow so fair game. I'm not interested in FB getting between me and the content that I want to see online anymore than I am interested in them censoring books (that also have an agenda) that I decide to read.
I guess OP meant 1) to be chronological timeline vs 2) FB-curated. It's very easy to skew what is "normal" perception of society by downrating some opinions and sticking others at the top. Especially with topics where there's no objectively correct answer looking at the big picture. Curating newsfeed and the resulting hivemind bubbles is the highway to polarised society.
3. New Republic and the rest of the MSM decide what is reliable and how you think?
Isn't that what this is all about? The mainstream media and the power brokers behind the MSM wanting control of a major means of communication?
Isn't that why they attacked google and google news? Isn't that why they attacked reddit? Isn't that why they attacked youtube? Isn't that why they are attacking facebook.
I think most people want option 1 and were happy with 1. But the MSM is trying to force option 3.
That's probably much closer to the truth than any of the participants are willing to admit.
Facebook has always been the devil, using your friends and hacking your brain to keep you using more Facebook. It's only now that the media has started waking up to the existential threat Facebook poses that suddenly they're some huge problem.
Note that while commentators all seem to agree that FB is a problem, the type of problem is always skewed to whatever audience that media outlet is more likely to sympathize with. That's not analysis. That's rallying the troops.
> Isn't that why they attacked google and google news? Isn't that why they attacked reddit? Isn't that why they attacked youtube? Isn't that why they are attacking facebook.
Also: don't forget WhatsApp, which gets blamed because people in backwards parts of Asia misuse its chat app to get lynch mobs together.
There should be an asterisk on "strangers" with a description that they aren't necessarily even real people since trending topics and news feeds are heavily manipulated by whoever is willing to pay the most for manufactured interactions.
I don't think anyone "decides" as such. It's just the incentives within the system promote whatever content people engage with most. As it turns out, negativity bias wins out.
I don't understand why people don't get this. "Hey we do X with the news feed algorithm and revenue is up 2.17%." That is how it works. It is up to humans to logically decide if this is moral or not. Algorithms just know numbers and morality is difficult to quantify.
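The mechanism described above — optimize the number, leave the morality to humans — can be sketched in a few lines. This is purely illustrative: the `Post` fields, scoring weights, and function names are hypothetical, not anything Facebook has published.

```python
# Illustrative sketch of an engagement-only feed ranker. Posts are scored
# purely by predicted interactions, so there is no term anywhere for
# accuracy, civility, or user well-being. All names and weights are made up.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    comments: int
    shares: int

def engagement_score(p: Post) -> float:
    # A weighted sum of interactions; shares weighted highest because they
    # spread content further. The algorithm only "knows" these numbers.
    return 1.0 * p.clicks + 2.0 * p.comments + 3.0 * p.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest engagement first: whatever people react to most wins placement.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm local news", clicks=120, comments=5, shares=2),
    Post("Outrage bait", clicks=100, comments=80, shares=50),
])
print([p.text for p in feed])  # the outrage bait ranks first
```

If an A/B test shows that tweaking those weights lifts revenue 2.17%, the tweak ships; nothing in the objective function pushes back.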
these algorithms hold a mirror up to us, and we don't like what we see. Outrage bait is so prevalent because we are easily drawn to outrage so it generates more clicks/engagement/revenue. I think cognitive dissonance is a powerful factor in this - we don't believe we are capable of irrationality, immorality and naivety so when we see these things happening we say "some evil faction that isn't like me must be at play here!"
Said evil faction is taking the guise of Facebook and Google right now but it's the oldest rationalisation in the book.
You didn't say how Facebook relates. I don't doubt that Facebook has made the Myanmar genocide worse, but I also think they would be genocide-ing regardless.
It's almost like blaming the telephone if a group uses it to coordinate to commit a crime. I think facebook should really try to make themselves out to be a content neutral utility. They are just going to keep getting stuck in messes where they don't have good options if they try to control political speech on their platform.
But where can I post pictures of my baby so granny can see them? Sure I’ll have to wade through the Russian misinformation to do it and have my entire data file stolen by a shady right wing political entity but I refuse to send baby pics via email.
Add to that one of my pet peeves: Folks claiming that they can't quit Facebook because how should they stay in touch with their 600 friends and family members without relying on this truely evil company.
For the record: I "deleted" my account some 5 years ago, when it became obvious how despicable this company actually is.
How did you get that from the article? Seems like the point was Facebook wasn't trying to get rid of Russian agents on its site as much as it was trying to bolster its image.
I think most of us have experienced the Russian "trolls" in one way or another, and there is very strong evidence that they have been, and continue to be, sponsored by the Russian state.
Also, below is a recent article about professional Iranian trolls, but mentions that the same cybersecurity company has been tracking Russian disinformation campaigns for years. There are thousands of people in Russia, in Iran, in other countries who go into the office every day and earn their paycheck by trying to move opinion in the USA and elsewhere in a certain direction, or spread FUD.
> FireEye’s information operations analysis team was formed in 2016, when hacked emails from several political figures were beginning to appear on the site DCLeaks. “All through that period, we were tracking the Russian effort to influence U.S. elections,” Mr. Foster said. “Obviously, social media is a very important kind of medium by which these campaigns are undertaken.”
yeah absolutely those people exist, I'm not denying it. But they're likely not the ones you've interacted with on twitter. Do you think Russia pays people to post inflammatory content on twitter, or to advocate for the Russian perspective like RT do? My money's on the latter.
> Do you think Russia pays people to post inflammatory content on twitter, or to advocate for the Russian perspective like RT do?
People generally are far behind on what's public knowledge about the Russian internet propaganda efforts. Yes, that's exactly what they've been known to be doing, including interviews with former participants.
> The agency has employed fake accounts registered on major social networks, discussion boards, online newspaper sites, and video hosting services to promote the Kremlin's interests in domestic and foreign policy, including Ukraine and the Middle East, as well as attempting to influence the 2016 United States presidential election.
As of the last known count, they had a $15 million/year budget. But they've been reported to have grown significantly since the success of the US election.
my implication is that these agents do the latter rather than the former. I'm suggesting that you can divide the "twitter trolls" set into two - 4chan users who post inflammatory content, and Russian agents who aim to help bring politics that are of interest to the Russian government into the mainstream.
how on earth does it benefit Russia to pay people to perform the actions that 4chan people do? I can't imagine posting edgy memes and attempting to "trigger the libs" is a national marketing/propaganda/outreach strategy.
Because "trolls" is a horrible term for it. It's propaganda, meant to sow discord.
Inflame the far-left and far-right, get people to distrust traditionally more objective sources of information.
Repeat lies on social media to reinforce desired narratives. Influence trending topics to direct discussion. All of this is fairly well known at this point.
There are even cases of "troll" accounts being outed and coming clean, sharing their twitter engagement numbers for particularly effective "fake news" tweets before deleting their exposed accounts.
This isn't a few people in a basement; it's a professionally run tech shop.
> how on earth does it benefit Russia to pay people to perform the actions that 4chan people do? I can't imagine posting edgy memes and attempting to "trigger the libs" is a national marketing/propaganda/outreach strategy.
Modern-day Russian propaganda isn't about trying to convince Americans to take Russia's POV or have a favorable view of it. It's about sowing generalized discord, mistrust, and internal conflict to weaken the US as a whole. Anything that inflames divisions is useful to that goal, even "trigger the libs" memeing.
if that's the case then surely Russia has agents masquerading as the far left too? After all if you're just pretending to be far right to enrage the left you're leaving half your strategy at the door.
> if that's the case then surely Russia has agents masquerading as the far left too? After all if you're just pretending to be far right to enrage the left you're leaving half your strategy at the door.
If you weren't aware, Russia did have its agents masquerading as the far left too:
> if that's the case then surely Russia has agents masquerading as the far left too?
Yes; that's well established (while it's less talked about, the 2016 operation included promotion of Stein and Sanders as well as the more intense promotion of Trump, and also included false-flag promotion of in-person left-wing and right-wing demonstrations at the same time and place with the intent of fomenting political violence.)
Russia’s government also had connections to groups pushing “Calexit” after the election, including those pushing it as a defense against Trump.
Russia’s government also has connections, and has overtly funded, groups seeking restoration of the Hawaiian monarchy.
see, this is more interesting. I'm British so while I followed the election itself fairly closely I'd only really heard the mainstream narrative on the Russian bots, that they were there to get Trump elected because Trump was buddies with Putin. The idea of them fomenting chaos on both sides makes a lot more sense.
I'm not saying I think the random jerk on Twitter is a Russian troll. Though random jerks on Twitter don't hesitate to label me as one. The reason I put "troll" in quotes is because in this sense it is a generic term for these professional spreaders of disinformation. They may be very pleasant, they may act like jerks, doesn't matter.
I'm not going to get into how I think they operate, but clearly they seem to be getting the results they want as they continue to spend their limited resources funding it.
I've seen people I suspect to be actual Russian "trolls" on twitter too. Generally they're profiles of very generic heartland Americans who post basically nothing about their lives but occasionally throw out support for one political cause or another. There's enough accounts like that that are almost exactly the same for me to think they're fake accounts. I haven't really seen the same on facebook at all, to the point where I'm skeptical that facebook would be a sensible platform at all for astroturfing politics. Most of the nonsense I see on facebook is from clickbait farms. I'm open to the idea that I've just missed the facebook side though.
I'm no fan of facebook, and obviously there's a lot we need to examine regarding social media impact on society at large, etc, but I wonder to what extent this recent demonisation is a result of the existing press powers flexing their muscles. For example, https://www.cnbc.com/2018/02/12/facebook-rupert-murdoch-thre...
Serious question: is someone bombing Facebook in the press for some reason? I have seen no less than four discussions bashing Facebook on HN in the last day. Not defending or attacking FB in any way, but the sheer coincidence warrants suspicion.
I like how you can tell the specific kind of criticism that an MSM article on FB will contain by looking at which unflattering stock photo of Zuck appears in the masthead. Here, the photo says "incompetent bureaucrat".
This sort of attack is nothing new. Did FB have any obligation to do anything other than follow the law? No. There's no betrayal here.
You have to keep in mind that outlets like The New Republic are dying, paying writers peanuts, and attracting a contributor group that has the same sort of uniformly authoritarian and strident political philosophy that's damaged the freedom of expression on college campuses and in larger tech companies. They see disagreement with their worldview as morally repugnant and they demand that institutions use their power to censor this disagreement and punish the critics. It's a philosophy I reject.
To be smeared by The New Republic as "betray[ing] America" is kind of an honor.