>Second, we will prevent “liked by” and “followed by” recommendations from people you don’t follow from showing up in your timeline and won’t send notifications for these Tweets. These recommendations can be a helpful way for people to see relevant conversations from outside of their network, but we are removing them because we don’t believe the “Like” button provides sufficient, thoughtful consideration prior to amplifying Tweets to people who don’t follow the author of the Tweet, or the relevant topic that the Tweet is about.
I want to applaud this, because it's clearly a good move. But I don't see how they can square "our recommendation engine is harmful enough that we need to disable it to protect the security of an election" with "we're gonna turn it back on in four weeks".
>>Second, we will prevent “liked by” and “followed by” recommendations from people you don’t follow from showing up in your timeline and won’t send notifications for these Tweets.
I don't know if a Twitter PM will read this, but showing Likes in my feed — and not having an option to turn it off in the official mobile app client — is one big reason I stopped consuming tweets.
It's a shame that the only way to opt out of seeing Likes is by using a third-party client.
This, and other timeline muddling they've done, are the reasons I use third-party clients like Tweetbot almost exclusively (other than when clicking a link opens the official app).
The thing that irritates me the most is that they also seem to track the last time you logged in via the Twitter website or official client, and show me "while you were gone" tweets that it thinks I might be interested in. Typically, they're right - so right, in fact, that I've often already replied to them.
I've always said (until Twitter went full asshole, which they're now reversing course on) that I'd pay for a "Twitter Pro". Let me use a third-party client to do everything that the official client does, but with the option of turning things off - provide likes in my timeline, which my client can filter out; allow me to access polls via the API (which presumably they don't do for everyone because of botting), and so on.
Twitter, I will pay you monthly to stop making me choose between an incomplete experience (third-party clients) and an awful one (inflexible first-party clients). It's guaranteed income. Please, sign me up.
I don’t understand why social media companies don’t offer a paid tier. They don’t have to charge everyone as they’d lose the vast majority of users, but for those who want to pay, why prevent them? They’d be making more money than they do from ads.
Wow, that sounds really annoying. I’ve used Tweetbot exclusively for years (I’m mostly a consumer, though — I very rarely tweet), and had no idea this was even a thing.
Twitter is really working hard to force people away from using their app.
You are right to say "for now" - the app will periodically force you back to "Home" view (after a seemingly random period) and then you have to manually switch back - I'm guessing Home view has better "engagement" metrics for the Twitter suits.
I believe they stopped doing that. The verbiage in the dropdown indicating that they'll switch back has disappeared, and my feed has been locked into the latest view for months now.
At least the Twitter client on Android lets you switch to the "Latest Tweets" timeline (via that "Star" icon in the top-right corner - same as on the web client), which doesn't have all the "engagement features" of the "Home" timeline (like showing "xxx liked"). IIRC Twitter automatically switched back to the Home timeline after a little while, but eventually it seems to learn to stay on the Latest Tweets timeline.
This is my recurring issue when I see most "changes we're making for the election" posts. Sure this seems like a good step for this election.
Fake news and the destabilization it can bring isn't just an American phenomenon though. Are they going to apply these precautions to elections in Brazil, or Myanmar?
For anyone else still confused: CVS is an American retailer whose receipts typically include additional advertising and bad, low-value coupons with paragraphs of exclusions at the bottom. This can make a single-item purchase produce a receipt exceeding 2m in length.
"This makes us money but harms the world as a side effect, but if we harm it too much the retaliation will cost us money. As a result we're going to only use it as much as we think we can get away with."
> I want to applaud this, because it's clearly a good move.
It’s not that clear to me that this is a net good move. Exposure to out-network content, depending on what is exposed, could be good for preventing echo chambers and confirmation bias too. Twitter likely went the other direction and promoted out-network tweets that only increased engagement, and blaming the feature itself for this is naive at best.
The second issue everyone seems to be missing is the potential partisanship of these feature changes. Does the usage of any individual feature distribute equally among all demographics? If not, feature-shaping near an election could be construed as an underhanded way to hamper the discourse of certain demographics. Of course this is very easy to dismiss as conspiratorial thinking, but I think ideally any intellectually honest discourse engine would have gone the extra mile to demonstrate the neutrality of their election-specific changes.
That is just false equivalency. Timing and intent changes everything. If a feature was not purpose built but happened to be used more by a certain demographic, that is not a partisan move at all, but removing it right before a political event intentionally would definitely be.
To be clear, I am not saying this is necessarily the case, but not being in the know is problematic as far as the health of democratic processes is concerned, especially when public discourse more and more depends on private enterprises. I don't want any fingers on the scale, because even if it might be favoring my side today a) that is not honorable b) there is no guarantee the shoe won't be on the other foot tomorrow.
I find it suspect that one of their criteria for calling the election is two authoritative news sources independently calling it. If this were nonpartisan they would have at least required a majority of authoritative sources or some other consensus.
Then you run into the problem of "What is authoritative?" By limiting it to just two, Twitter can say, "We like these guys. They're reputable," instead of having to line up and vet thousands of other sources and task people with monitoring all those authoritative sources around the world.
I've never understood the Dewey-beats-Truman calling system. In the UK each constituency announces results after its votes have been counted (and recounted). Those results arrive during the night: polls close at 10, the first results come at 11, there's a good idea of who will win in most elections by 5, and the last few results might stretch into 9 or 10 am if ferries are having issues.
But aside from a national exit poll at 10pm, there are no real guesses.
It seems that US elections are different: they predict who will win a given state far earlier in the process, before the votes are counted even the first time. Is that right?
My own global broadcaster takes its "calls" from ABC. I think that ABC, NBC, CBS, Fox, CNN, the New York Times and the Washington Post are probably good enough for "calling it". I was working election night 2012 in our Washington office and was shocked how early the "Obama wins" straps went up - I have a photo from just 23:19 Eastern of Fox saying Obama was re-elected. That's something like 4 hours before polls close in Alaska and Hawaii!
If 4 of those networks and papers have called it then it's likely good. Sure there's BuzzFeed and whatever that may have enough professional staff on to really judge it, but it's unlikely that 3 networks and a major paper are all going to call Dewey rather than Truman.
> It seems that US elections are different: they predict who will win a given state far earlier in the process, before the votes are counted even the first time. Is that right?
Yes.
At least in Dewey / Truman, they counted the votes and Truman won. Under the current system, Truman would be expected to concede gracefully after somebody predicted he would lose, removing the need to count the votes entirely.
I really disliked that "feature", I usually use the chronological mode to avoid seeing them.
If I wanted to see that content I would follow it, and if I wasn't following it and the content was good, someone I follow might retweet it. And if I follow someone who retweets annoying things, I can disable retweets for that person.
The chronological mode doesn't help either. That's the reason why I keep using half-broken third-party clients on mobile and Tweetdeck on desktop.
If I wanted to "discover" or "explore" something, I'd open the dedicated section, thank you very much. Modern social media is ridiculously user-hostile.
> we are removing them because we don’t believe the “Like” button provides sufficient, thoughtful consideration prior to amplifying Tweets to people who don’t follow the author of the Tweet, or the relevant topic that the Tweet is about
It's amusing that it took the election for them to finally admit this. I had thought much the same thing of Twitter's recommendations for a long time. People discussing divisive political issues show up in my feed a lot, despite deliberate curation of who I follow, just because the people whose personal work I appreciate also happen to participate in political discussions in addition to posting their work, so the issues they "like" get posted to my feed. All of the posts they liked were from accounts I had no interest in following. Personally, this change would improve the signal-to-noise ratio for me.
I am thinking they had to balance the improvement in engagement metrics they would gain by putting more novel out-of-network tweets in view against the vitriol and divisiveness that election-related media and misinformation propagating further would inevitably cause. Yes, there is not enough "sufficient, thoughtful consideration" put into the recommendations, but if they ultimately increase Twitter's user engagement and revenue then it takes a lot of effort to backtrack on them. It also makes me continue to believe that some kinds of information are detrimental in some ways, and Twitter is now controlling the flow of that information so as to prevent public backlash or some other harm.
I’ve noticed that they’ve replaced this with surfacing tweets from your lists. This is really annoying to me as I’ve moved political follows to a politics list so that I’m not inundated with politics and have to explicitly check the list to see that content. Now I get it all the time.
Their recommendation engine is not harmful in all cases. I used to use Twitter to follow some machine learning/programming profiles and remember the recommendations being quite good.
In my experience, if anybody you follow ever interacts with anything political, that quickly drowns out any other interesting content in the recommendations. If you exclusively follow people who exclusively post and interact with programming content, then maybe your recs will be okay. But all it takes is one person in your network to interact with one other topic to poison your recommendations for good.
This is the amazing duality of social networking: as soon as it becomes topic specific, the quality of discussion improves in direct correlation to how technical the topic gets. The moment the conversation is just general babble, anonymous / pseudonymous people will find things to argue about.
I sadly don’t find this to be true. The caustic subjects in all disciplines get talked about more and recommended more. I made the mistake of watching some flight videos and now get recommended crash montages all the time. I watched the C++ con talk on “OO considered harmful” and now have an inordinate amount of bullshit in my feed. It feels kind of crazy that my YouTube recommendations took such a downturn suddenly.
Yup. I'd love to live inside my own filter bubble.
Does anyone do personalization well?
I worked on a team that was trying to transition from recommendations to personalization (a fashion retailer with ecommerce; think Macy's, Nordstrom, etc.).
Our imagined gold standard was the expert salesperson, the personal shopper. So: somehow figure out how to make a digital salesperson. Like StitchFix or Trunk Club. Curation at scale.
But I couldn't even figure out how to solve my own ultimate shopping challenge: recommend a quality white t-shirt that fits.
Though I doubt it, maybe StitchFix has cracked this nut. If they (or someone) has, we need to distill their magic and apply it more generally.
(I haven't tried Spotify or Apple Music. I've heard their personalization efforts are pretty good, so I'm semi-curious.)
--
Oh, and a post script, while I'm chewing on this topic.
I have a theory why quality personalization isn't likely to become the norm. I can't yet imagine how it'd displace today's biz models. Targeted advertising has sucked up all the oxygen.
It's hard to articulate the difference between personalization and targeted advertising (recommenders). One of those torturous endless discussions our teams couldn't escape.
"Personalization" is needs fulfillment whereas targeting is attention stealing. Personalization is much harder to monetize, because it relies on conversions. Whereas with targeting, the money changes hands before hand and is easier to pretend it's working.
I will clarify: I don't mean that _feeds_ are doing well with this. I just mean, when you dig down to a place where Real Discussion is happening, like a web1.0 message board or group or whatever, there's still tons of good, good-faith discussion happening. It's just not the stuff that percolates up to the top. Relying on your feed recommendations for your content is like living off of nothing but McDonald's; it's not good for you.
> when you dig down to a place where Real Discussion is happening, like a web1.0 message board or group or whatever, there's still tons of good, good-faith discussion happening
I mean topic specific in the sense of finding places where deep discussions are being had - like, say, a mailing list or forum for a very particular thing, scientific or programming or cars or whatever you like. Not what you see on Facebook. Social networking is not just FAANG, look in the long tail for the good stuff.
I don't think it's that illogical to argue there's a time-sensitive component. False news is the price you pay for breaking news sometimes. It's fine in the long run, but the argument is that during an election the risk is too great.
Do you also find it hard to square the statement "large gatherings are dangerous enough for spreading diseases that we need to ban them for public safety" with "we're gonna have them again when the pandemic is over"?
This is the truth. However, bringing attention to it and shaming them over it is still worthwhile since, while their actions are in line with the desire of their customers, it's contrary to the desire of the majority of their user base.
Recommendation engines are neither inherently good nor harmful -- think of them as an amplifier.
In normal times, they work pretty well -- it gives you more of the stuff you like, which usually isn't harmful. Just entertaining/fluff/interesting/whatever.
But closer to an election, they can be gamed and weaponized for harm -- Russia can activate botnets, fake accounts, etc. and lies spread like wildfire.
So it's not about whether the recommendation engine is good or bad -- it works. The point is that close to an election, bad actors weaponize it 1,000x more often (or more) than they do the rest of the time, so they're turning it off. When it's not being weaponized so much, they can keep it on.
It's like paying for a DDoS protection layer at times when you know your site is likely to be attacked, but not using it the rest of the time.
It seems hilariously naive to think that the only time people will be weaponising it is during an election. There are elections happening regularly all over the world, so if your point holds true, such things should basically always be turned off.
They didn't say that though. They're saying the consequences are worse before an election. At other times the pros outweigh the cons. Not sure I agree but it's a pretty reasonable argument, I think.
> But closer to an election, they can be gamed and weaponized for harm
But there are elections all the time. There are many different elections in the US alone, and many, many more throughout the world.
And elections are just one thing, what about influence over opinions for bills/law/reforms? Wars? Public opinion about basically any subject, from economics to gender/race/class tensions?
If you accept that social media is weaponised for this election, it seems naive to not accept it is being weaponised absolutely all the time
By growing as large as they have, and by building automated systems to amplify content to mass audiences, they have acquired that role. It is unfortunate that their control over their responsibility is unilateral and undemocratic. But at their scale, if they chose not to try and assess the accuracy of information, but instead to blindly amplify it based on engagement metrics, that is also a political choice.
One possible option that never gets discussed is to nuke the amplification methods. If we stop recommending content automatically this ceases to be a problem.
As someone who's worked at a big social media company — no, that's not at all what consumers want. They want chronological feeds with zero garbage mixed in. It's okay to have a separate recommendations feed, some (not all!) people want to discover new things, but it's totally not okay to meddle with the main one, and it's nothing but mockery to give users no control over it. People also want their preferences respected, they certainly don't want them reset every now and then.
The only reason people keep using services like Twitter is because their network keeps them there.
Well I guess it depends on making a distinction between what consumers think they want, versus what they actually do.
Yes, people say they don't want recommendations, because 95% of them are irrelevant.
But then the 5% (or 2% or 0.5%) turn out to be super-relevant, and they find new people to follow that they love, and learn about things they love, and the experience in the end turns out to be a net positive.
Their actions show that it's valuable in the end. Otherwise the feature wouldn't exist at all. Recommendations aren't advertisements, sites don't make money off them -- sites use them because people genuinely find things that lead them to use the sites more.
I'm not denying the undeniable fact that some people sometimes want to discover new things. I'm just saying that it's absolutely possible to have it done in a respectful manner. No one, ever, under any circumstances, likes or wants to be manipulated, be it overtly or by having their subconscious played with — period. Adding non-configurable extra anything into people's newsfeeds, be it recommended posts, ads, or "people you may know" blocks, is a crime against user-friendliness. Those who do want to discover new things will simply open the "discover"/"explore" tab that contains a dedicated recommended content feed on their own. There is no need to nudge anyone to anything.
People aren't stupid if you don't build your UI/UX around the assumption that they are. They also like transparent, understandable algorithms. Chronological feed of (only) the people one follows is as transparent as it gets. A chronological feed with some recommendations mixed in is more opaque and confusing. An algorithmic feed is an epitome of opaqueness. Opaqueness naturally drives users away because it doesn't exactly instill confidence that their posts will reach their followers.
Another example: do you understand what the "see less often" button in Twitter does? No one does. No one likes cryptic algorithmic bullshit forced on them with no way to disable it.
Choice is very important.
> versus what they actually do.
Do manipulations work? Of course they do. Are people happy when they are manipulated? Of course they are not.
> Recommendations aren't advertisements, sites don't make money off them
They absolutely do. Recommendations aren't there because Twitter wants to be helpful — they'd be more user-respecting as I said above if that was the case. They're there because they drive engagement metrics up, and those in turn translate into someone's KPI.
Do consumers want it, or is it merely taking advantage of some more subconscious human behavior patterns? And if the latter, is this something that is bad for humankind?
Consumers want a lot of things with negative externalities - goods that cost less because they're produced with slave labor, transportation that emits greenhouse gases, etc. Their preference shouldn't trump the obligation not to harm third parties.
Automated recommendations of a human-curated set of content - e.g. Netflix recommendations for its suite of programming - are much less objectionable, because they can't amplify anything the organization has not intentionally decided to present. It's the combination of UGC and ML recommendations that presents problems.
> But most of the time people like getting content recommended. It's what consumers want.
Do they, or do they just boost some KPI that serves as a proxy for actual utility?
Anecdotally even in non-tech circles most of my friends complain about how bad recommended content has gotten, or roll their eyes at whatever "personalized" ad for garbage they've been recommended.
I disagree, this isn't the nuclear option, the nuclear option is forcing these platforms to have a more editorial role in the content they're serving and that comes with a whole bunch of good and a boatload of bad.
Gigantic unmoderated platforms existing like this that promote random snippets of speech to drive user engagement and ad-revenue is a thing that shouldn't exist. The problem we still haven't solved is how to specifically kill off platforms of this type without killing forums and discussion boards in general. I think there is a distinction there but I'm not certain precisely what defines it - but if anyone figures it out please let us all know!
We've had forums and discussion boards for decades now that do not have recommendation features. I don't see why we can't put that genie back in the bottle.
IMO the moment you start highlighting things that people didn't explicitly ask for, it's an endorsement.
I think it's like gerrymandering - yea, we can all tell when it's gotten to stupid levels, but the Supreme Court wasn't wrong to want a definition of where the line between "okay" and "bonkers" is. I personally think the decision could've been a bit more aggressive against gerrymandering, but we do need some clear line to say "If you're beyond this you're doing an illegal thing" - and while we could close in on that line over time with a slow accumulation of precedent, it'd be a lot cleaner to have a decent measure.
Is lying and deceiving people fine, according to the 1st amendment?
For example, foreign states that pay armies of internet trolls to, in effect, choose the president in the US -- is that what the 1st amendment wants to happen?
I think "information" can kill more people than cocaine, is more dangerous
>Consumers also want cocaine, but that doesn't mean you get to sell it to them with impunity.
The appropriate situation for those that want cocaine is something similar to the rules around purchasing/possessing/using alcohol.
That approach makes sense both from an economic standpoint (increased tax revenue, reduced spending on "enforcement" and incarceration, increased economic output because fewer people are in prison, etc.) and from a societal standpoint (more resources available to the 2-5% of folks who end up with dependency problems, reduced property crime, not harming communities by pulling significant numbers of residents out of the community and incarcerating them, etc.).
As such, there's no good reason for any mind altering substances to be illegal. Rather, they should be regulated and taxed appropriately.
I also wish that these decisions were more democratic. At the same time, personally, I think that the folks who made these changes did a great job. They're helping preserve American democracy.
In particular, I appreciate:
"10/2019 - Banned all political ads on Twitter, including ads from state-controlled media"
I hope Facebook employees are taking notes.
I'm also a huge fan of:
"...we will label Tweets that falsely claim a win for any candidate and will remove Tweets that encourage violence or call for people to interfere with election results or the smooth operation of polling places."
Does anyone know whether the next one is official US policy, or whether it's just Twitter's policy?
"To determine the results of an election in the US, we require either an announcement from state election officials, or a public projection from at least two authoritative, national news outlets that make independent election calls."
Mail-in ballots can arrive at the voting office as late as November 20th this year (depending on your state... you still have to mail your ballot out by November 3rd, though) [0]. With so many people voting by mail this year, we might not know the election results until November 20th. I hope that election officials (and Twitter officials) will take that into account.
>But at their scale, if they chose not to try and assess the accuracy of information, but instead to blindly amplify it based on engagement metrics, that is also a political choice.
No that's an apolitical choice.
The political choice was not doing this in 2012 when it was Obama benefiting from it.
Welcome to the future, where we must choose between either drowning in a sea of misinformation or sustaining ourselves on a puddle of information that a "benevolent" third party has deemed safe.
It's amazing that this exact scenario was described in Metal Gear Solid 2, a ~20-year-old video game.
>Colonel: But in the current, digitized world, trivial information is accumulating every second, preserved in all its triteness. Never fading, always accessible.
>Rose: Rumors about petty issues, misinterpretations, slander...
>Colonel: All this junk data preserved in an unfiltered state, growing at an alarming rate.
>Rose: It will only slow down social progress, reduce the rate of evolution.
A lot of people--authors, technologists, public intellectuals, and others in what was the broad spectrum of 'nerd culture' at the time--saw this crap coming decades ago but lost the fight to stop it.
The writing has been on the wall about the dangers of social media for a very long time. It's just taken this long for people with the power to even consider doing anything about the dangers to start taking it seriously.
I'm speaking from remembering things I read over the past few decades on Usenet (yes, that far back) and on blogs. I'd be amazed if any of this is still online.
Going back even further than that, however, James Burke's Connections warned of the risks of weaponized data mining in 1978, decades before Facebook commercialized the use of data mining to sell behavior modification as a service.
Dystopian futures aren't hard to predict. The hard part is getting enough people to listen to the predictions to prevent them from coming to fruition.
I vehemently agree with this... however, I think platform intervention against potential incitement is the best of a very bad set of choices available for how to handle the next six months.
The ability of social media platforms to spread malicious propaganda intended to incite division is incredibly dangerous. The moment where a US president has refused to commit to a peaceful transfer of power should he lose the election is absolutely not the moment to test how well society can withstand the use of social media as a propaganda platform.
Society needs a long term solution for social media--my take is that it should be shut down completely--but this is a short-term emergency and now is not the time to let the perfect be the enemy of the good.
Being cautious is not a virtue when erring on the side of caution may lead to tanks in the streets.
> The ability of social media platforms to spread malicious propaganda intended to incite division is incredibly dangerous.
No it’s not, this is propaganda. If Facebook/Twitter wanted to tomorrow they could limit stuff in your feed to people you know IRL. This would take less than 1 sprint for top tier engineers with the right access to implement and ship.
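To make the "less than 1 sprint" point concrete, here's a minimal sketch of what such a filter amounts to, assuming a hypothetical tweet/follow-graph model (mutual follows as a crude stand-in for "people you know IRL"), not Twitter's actual code:

    # Hypothetical data model, not Twitter's real one: keep only tweets whose
    # author the viewer mutually follows, a rough proxy for "people you know IRL".
    from dataclasses import dataclass

    @dataclass
    class Tweet:
        author_id: int
        text: str

    def mutual_only_timeline(timeline, following, followers):
        """Drop everything not authored by a mutual follow of the viewer."""
        mutuals = set(following) & set(followers)
        return [t for t in timeline if t.author_id in mutuals]

    # Example: author 3 is the only mutual follow, so only their tweet survives.
    timeline = [Tweet(1, "hello"), Tweet(2, "recommended for you"), Tweet(3, "hi")]
    print(mutual_only_timeline(timeline, following={1, 3}, followers={3}))

The hard part obviously isn't the filter; it's that every tweet it drops is engagement the growth metrics depend on.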
They’re only dangerous because the real problem is social media’s hypergrowth strategies are incompatible with policies that are good for society.
Any suggestion otherwise is like oil companies encouraging end users to recycle.
To be a complete curmudgeon, this just seems like self-serving nonsense from Twitter.
To pretend that they have some "democracy breaking" power in their platform of the loudest 1% of individuals that actually contribute and that this needs to be tamed with special rules to protect the integrity of elections seems like an absurd fantasy.
Either they're right and their platform can be a tool used for evil in general, in which case, why limit these rules purely to one particular federal election? Or they're wrong, and this really is just some bizarre internal marketing effort.
I think Twitter does have that power, but not in a way that Twitter or Twitter users understand. Most people aren't on Twitter and don't get their news directly from Twitter. And people who are on Twitter and who follow politics on Twitter tend to already be hyper-partisan, unlikely to change their minds in response to tweets. The majority still get most of their news from traditional news media. What's really been changed by Twitter is how the traditional news media is produced. Journalists themselves are very active on Twitter and report tweets as if they were news. So Twitter ends up filtering down to the public anyway, even if most of the public isn't on the platform.
As a result of this, Twitter's changes won't have much effect. It still all depends on whether journalists are reporting tweets to the public, and which tweets. Even if tweets get censored by Twitter, ironically that in itself becomes "newsworthy", and journalists spread the censored tweets.
Politicians love Twitter because it allows them to say whatever they want, without having pesky reporters ask them unpleasant questions. And the reporters nonetheless report these unfiltered messages (often lies) to the public. It's basically free press, free advertising. A politician doesn't have to be invited onto a news program, they can just make "news" whenever they want, in convenient soundbites.
The ultimate danger of Twitter to society is that journalists can't resist the temptation of reporting tweets as news. Of course this is a failing of journalists, not Twitter, but Twitter is giving these people a global public unfiltered platform they wouldn't otherwise have.
It's a myth that twitter drives news. Before Twitter we had "man on the street". Journos write the story first and then cherry pick the quotes they want.
They absolutely do have a major part of that power. Together with Google/Youtube + FB/insta, they pretty much control the political discourse. What other big platforms are there?
Without getting into the argument of whether any speech is apolitical or whether politics belongs in the workplace, Twitter clearly has a huge place in politics as a result of its nature as the premier social media platform for world leaders and journalists and anyone with an opinion today.
Regardless of their internal culture, Twitter will always have to make some political decisions.
> the premier social media platform for world leaders
Mostly one world leader. The rest of them seem to continue to primarily communicate through press conferences, television appearances and other fairly traditional methods.
Hard disagree. It’s inevitable with any company in their position. There should be no company in that position. We need to decentralize and federate. Mastodon is AFAIK the prominent implementation here but regardless, we need to start pushing for and exploring networks and platforms operated under completely different premises.
Just to clarify: You don't disagree with what I wrote, but rather with the existence of something that caused me to write something like that in the first place?
I completely agree with you and I used to work at Google and see this kind of activism first hand. It's not just activism though, the C-suites of these companies believe in this kind of thing.
Being pro gay rights has not really been reasonably mainstream in the US for quite a while now.
Overall, the majority has been pro gay rights since around 2004. By 2017 it was 70% for, 24% against. Breaking down by party in 2017, Democrats were 83% for, Republicans 54% for.
Women 73% for, men 66% for. White 73%, Hispanic 70%, Black 63%.
81% postgrad, 77% college grad, 66% some college, 64% high school or less.
83% 18-29, 79% 30-49, 65% 50-64, 58% 65+. See [1].
When it comes to same sex marriage, by 2019 61% overall approved. That's been above 50% since 2013, and above the percentage opposed since 2011.
Republicans still aren't majority in favor, at 44%. Democrats are at 75%.
By religion, 79% of the unaffiliated are in favor, 66% of white mainline protestants, 61% of Catholics, 44% of Black protestants, and only 29% of white evangelical protestants.
66% of women, 57% of men. 62% of whites, 58% of Hispanics, and 51% of Blacks. 74% of Millennials, 58% Gen-X, 51% Boomers, and 45% of the Silent Generation. See [2].
Why do you think the pressure is coming from inside? There are billions of people watching these platforms. Even governments scrutinizing them. Unruly employees are fairly easy to handle, as long as there's not outside pressure too, but in this case the outside pressures are immense.
Social network moderators are almost always low-paid, low-power, low-profile employees. It's not a great job. It's a high-volume job, like an assembly line. The highly compensated Twitter and Facebook software engineers are not doing the content moderation. They don't have the time, and they would run away screaming if they had to do it for an hour. It's likely that a lot of this work is even outsourced.
I assume they have created a hierarchy with a number of different roles. That's generally how you scale people-intensive tasks. I have no idea why you would want to classify all of those roles as "assembly line".
Let me ask this: what evidence is there that the results of Twitter's censorship are actually in line with the political beliefs of Twitter's employees?
In many cases, Twitter's rules have been used to suspend accounts that people thought they were supposed to protect. And Twitter has gone out of its way and contorted every rule in order to protect the President from censorship and suspension, out of "public interest", despite the fact that he has repeatedly violated the rules that would have caused anyone else to be suspended.
That makes no sense to me. Do you think social networks would be less toxic and less abused if all the devs at Facebook or Twitter were apolitical? How would that work?
It's the quest for "engagement" and getting always more users and ad views that generates this situation. What can be used to sell you shoes and earphones can also be used to sell you political ideas. When the algorithm wants to show you some inflammatory and misleading political factoid because it knows that it's very likely to make you react, it's working as intended. Not because it was written by a communist, but because it was written by somebody optimizing for this metric.
As a counter example: do you think HN manages to remain mostly not completely trash because it's run by apolitical people or because it's effectively run not-for-profit? I think I know the answer to this question.
It's amusing how I've seen this "internal politics" boogeyman pop up in discussions over the past month or so. As if all the woes of Silicon Valley could suddenly be blamed on those pesky "woke" devs while everybody else just tries to get the job done. This American election can't be over soon enough; everybody seems to be losing their marbles.
I don't see how that has anything to do with the topic at hand. We're not talking about Twitter's success, we're talking about its influence on elections. If anything it's because it's been very successful (by some metrics) that it's in this position.
I'm not saying that an ultra-politicized workplace can't be an issue, I'm saying that it's silly to blame this particular problem on it. The very concept of ad-supported social networks is the issue, not the political alignment of the guy who writes the CSS.
There's a difference between corporate culture being 'apolitical' - and the responsibility of a large curated network to worry about misinformation.
Dang will kick you off HN for all sorts of reasons - that's his right, and it's generally not 'political'.
It's hard to ban 'fake news' without possibly getting political, but it's not rocket science.
There are already all sorts of things on FB and Twitter that are censored, and it shouldn't be 'political'. If you say you want to murder someone, well, then that's a problem.
Now email, that's different. If you want to be a moron over a completely private network, or you want to be a moron on your own website, then go ahead.
As a citizen of the world with several passports including a US one: this happens every 10 years. The last time was conservatives doing it so people are shocked that liberals are just as insular. The 90s PC culture wars are a thing that went in the memory hole really fast when Clinton needed the women who were accusing him to be sluts and harlots again.
On the bright side, social media is becoming so unbearable that we'll see more decentralization. It's hard to get banned from a forum when you run an instance of it yourself. And hard not to be when they are run by the mentally ill who think that saying "crazy" is a bannable offense for being ableist.
>Do you think social networks would be less toxic and less abused if all the devs at Facebook or Twitter were apolitical?
It isn't possible to be completely "apolitical" but you can be non-partisan. For me this isn't a left vs right or Republican vs Democrat issue, it's an establishment vs everyone else issue. If this policy had been in effect in 2002 and 2003, people would have been censored and banned for disputing "the fact" that Iraq had WMD. Just a few days ago NATO members blocked the former OPCW chief from giving testimony on the (very much disputed) narratives regarding the alleged chemical attacks in Syria. You can be sure the people being blocked and censored by Twitter won't be those pushing the official US government narrative. The argument Twitter is making - that "people are too dumb to figure out what is true, so we will tell them what is true" - is very dangerous when Twitter doesn't have a monopoly on the truth, and is run by people with a vested interest and belief in "establishment" narratives, no matter how questionable (or provably false) those narratives are.
I'm not denying this at all, but again this is not the point I'm discussing. All companies have a political agenda one way or the other, even if it's just driven by profit.
As a thought experiment: imagine if Twitter did the same thing as Coinbase did, got rid of all the politicized people. Problem solved? Of course not. The problem is that Twitter has this incredible influence. Asking them not to do anything is also not neutral and not a solution, you'll end up with 4chan or gab.
Take WhatsApp in Brazil. It's been instrumental to Bolsonaro's election due to the astroturfing and dissemination of fake news by pro-Bolsonaro supporters. Do you think it's due to Facebook's employees being pro-Bolsonaro? Or Facebook itself espousing a pro-Bolsonaro stance? Of course not.
This is literally false. As you said, "SOME THINGS ARE JUST TRUE AND SOME ARE NOT."
I didn't buy it myself, and there was certainly significant opposition to the Iraq invasion, but to say nobody was buying it is a huge exaggeration, unfortunately.
That last sentence doesn't apply very often in the real world. We're not talking about philosophically pure truth in most cases, we're talking about things like headlines, in which the same factual event can be spun in innumerable ways, all of which are superficially true.
Bullshit isn't always, or even mostly, an outright lie. I'd go so far as to say that making shit up whole cloth is the rarest kind of deception.
How do you not get that this is the exact point? “Experts” in the US government and the “intelligence agencies” declared forcefully that it was true. You—-not an “expert” and not a member of “authoritative” US intelligence agency—-claiming that it was just “propagandistic patriotic fervor” would be “misinformation” stated “without evidence”. Stopping people from questioning the narrative is fundamentally against free speech and having a small group of people presume to be the final arbiters of truth with the power to suppress contrary speech is dystopian to the max. You want free speech when you agree with it and suppression when you disagree with it. That’s deeply unprincipled.
>Stopping people from questioning the narrative is fundamentally against free speech and having a small group of people presume to be the final arbiters of truth with the power to suppress contrary speech is dystopian to the max.
I am so sick to death of hearing this "argument." Because it's irrelevant to the situation. And I'll tell you why:
In the United States,
1. Private entities are not bound by the First Amendment. That means they can host (or refuse to do so) any speech they choose, because that's the First Amendment right of all individuals and private entities;
2. The US government is forbidden to be the arbiter of speech, except in a few, clearly defined areas (advocating the violent overthrow of the government, making credible threats of violence, etc.);
3. If you don't like decisions about speech that a particular individual or private entity makes, you are perfectly free to speak out against those decisions, organize others to do so, and/or vote with your feet/wallet and don't engage with that individual or entity;
4. Unless you have a management and/or ownership role in a private organization (e.g., Twitter), you don't have the right to change their policies or speech;
5. You absolutely have the right to speak out and say just about anything, but you don't have the right to force others to host that speech on their private property. And that's a very good thing. If that weren't the case, I would have the right to blare midget furry porn or all-goatse-all-the-time projections or all manner of other offensive, hateful, disgusting things in your living room.
I'm sure many will disagree with me. If you do, please engage me in discussion in addition to taking other action -- both of which (and thank you HN for making that possible) are your free expression.
I agree that private entities can largely do whatever they feel like. But there is something deeply obnoxious and unprincipled when these entities piss on your leg and tell you it's raining. They're actively suppressing non-establishment viewpoints while pretending to be a non-editorial, neutral platform. And, perhaps most insidiously, because of their secret suppression of ideas and voices, they make it seem as if certain mainstream viewpoints are actually unacceptable fringe ideas. Suppose I ran a forum used to discuss crimes in my city, but secretly hid anyone's post unless the crime they were discussing had an Asian or Arab person as a suspect? And then suppose I also secretly deleted any comments from people complaining about the secret editing. That's not the style of forum people should be defending. It's legally permissible but all types of dystopian.
>I agree that private entities can largely do whatever they feel like. But there is something deeply obnoxious and unprincipled when these entities piss on your leg and tell you it's raining.
You won't get any argument about that from me.
>That’s not the style of forum people should be defending. It’s legally permissible but all types of dystopian.
I'm not defending any such forums. In fact, I avoid them like the plague.
But I do support freedom of expression.
And whether you (or I, for that matter) dislike Twitter or Facebook, or even Stormfront or the ACLU, while we are perfectly free to express our dislike, expose dishonesty or bias, recommend/create other forums and encourage others to do the same, we aren't allowed to block the speech of others.
What would you suggest as a viable alternative to the status quo?
I think Justice Brandeis[0] said it much more succinctly than I did:
"If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the process of education, the remedy to be applied is more speech, not enforced silence."
You must be very young, because this is totally false. Both political parties, every network and every newspaper were not only uniform in their belief and assertion that Iraq had WMD, they ridiculed and attacked everyone who denied this falsehood.
>Israel, Russia, Britain, China, and even France held positions similar to that of the United States; France's President Jacques Chirac told Time magazine last February, "There is a problem—the probable possession of weapons of mass destruction by an uncontrollable country, Iraq. The international community is right ... in having decided Iraq should be disarmed." In sum, no one doubted that Iraq had weapons of mass destruction.
There should be no doubt that disputing the official narrative on Iraq WMD would have been censored by Twitter and other big tech companies that use establishment narrative as the only benchmark for truth.
>SOME THINGS ARE JUST TRUE AND SOME ARE NOT. There is no "two sides" to them. There is objective truth and objective lies.
And just who should decide what these things are, you? Twitter? The US government?
There was an incredibly harsh and damaging debate on this matter. Back then I was on the losing leftist side. We watched the Fox News broadcasts in horror.
Nowadays I find myself on the rightish side. Fox News is still insane though.
In my mind the current equivalent of the outsized influence of 2002/2003 Fox News programming is the political influence of employees of SV companies.
Somehow the common factor here is Americans imposing their misinformed view of the world on the rest of the world by force, be it physical or cultural.
> the political influence of employees of SV companies
Maybe employees have power in Europe, but here in the United States employees have almost no power. Employment is at-will, and anyone who causes trouble for management is summarily fired.
Even management itself can be summarily fired by the board of directors of publicly owned corporations, like Twitter. Jack Dorsey is answerable to the stockholders, and also to the advertisers who generate Twitter's revenue. Not to the employees, who could all be eliminated and replaced if necessary. Jack too could be eliminated and replaced if necessary.
The only exception is Mark Zuckerberg, who set it up so that he has total voting control over Facebook and is untouchable by the stockholders. But that also means Zuck is untouchable by the employees.
The idea that SV corporations are run by the employees is a strange one.
Perhaps, but a problem is that people aren't nearly as good at discerning one from the other, and they are especially bad at conceptualizing the notion of unknown, which is what most things are. Luckily, the media is able to define a lot from that category as axiomatic, so most people no longer have to consider those ideas, provided those who aren't down with the program can be managed (which can also be done effectively via axioms).
> This American election can't be over soon enough, everybody seems to be losing their marbles.
Brace yourself then, because this election will not end on November 3rd. And the craziness we see today will be nothing compared to whats coming next. Regardless of which side "wins".
This is literally the opposite of the conclusion you should be coming to.
This is what a lack of will to clamp down on misinformation and lies from the President on their platform, out of fear of subsequently being politically repressed, has resulted in.
Twitter is twisting and contorting itself in a million knots to avoid simply banning the president's account for the damage it is causing to society, the economy, and literally human life.
Silicon Valley tech workers were sold on a dream of “changing the world” rather than building any other normal business, so I would not expect any different really.
It's really become a religion at this point, with Capitalization of certain Sacred Words and canonical texts that are required reading to even engage in critical discussion of overwrought, overly broad claims.
The worst part is that the people in these companies rarely have any experience with traditional religion, let alone extreme/fundamentalist religion, and therefore don't see the obvious patterns used by these extreme activists. I grew up as a dissenter in a fundamentalist Christian family, and the reactions I got when I stood my ground and said that I believed in evolution and a billions year-old earth are incredibly similar to what I see when trying to argue with these extreme activists. Also similar to my fundamentalist upbringing are the subgroup (minority) of people who obviously derive immense satisfaction from their piety, and won't hesitate to condemn others in the in-group for not demonstrating their full dedication to The Mission. These enforcers fit a personality profile that is identical to what I encounter at my overly woke workplace. Recently one of these enforcers told a guy (he was raised by Polish immigrants in a poor, inner-city neighborhood of Philadelphia) that he was wrong when he committed the horrible atrocity of celebrating the purchase of his first car, and saying how he "deserved it" after all these years. She was quick to spoil the happy hour at the outdoor bar by forcing him to acknowledge his privilege.
It's our fault. We neglected and destroyed Usenet and made Twitter so large. Well, the latter is mostly the media's fault for paying so much attention to it.
Why not? Is Twitter even really a "tech company"? What tech do they even produce? They seem to me to be an advertising company that uses computers and the Internet to serve ads. They just happen to have a social platform they use to get eyes on those ads.
Ads are all Twitter sells; you can't purchase any other service from them. The only purpose of the social platform is for there to be people to serve ads to. That's not a product, that's a marketing strategy.
So why shouldn't they have a moral responsibility to curb the use of their platform to do societal harm through the dissemination of misinformation? They don't exist to provide an uncensored social platform, they exist to generate profit through advertising. If their practices do harm they should answer for it.
It's just capitalism; there's nothing special about money having power. What's weird is that they're trying to diminish their power. It seems likely that while doing this, they're still selling more influence or making choices about what people see to benefit their shareholders.
It's not your role to tell people they aren't allowed to use Twitter if they want to. Twitter's role is hardly democratic; they force their content on people.
I strongly believe Twitter the company will come to deeply regret this direction. The employees who made it happen will simply move on to the next thing.
I'm fascinated by the comments here, which seem to be assuming that Twitter is operating in an environment in which the predominate source of trending information is well-meaning individuals and not motivated state-level actors intent on disrupting an election.
"predominate source of trending information is well-meaning individuals and not motivated state-level actors intent on disrupting an election."
What evidence do you have that the predominant source of trending information is motivated state-level actors? That is an incredibly bold statement, and the framing you chose is such that there needs to evidence to the contrary, rather than evidence to support such a claim.
Edit:
A bunch of responses are mistakenly thinking I am denying the presence of Twitter bots created and operated by state actors. I'm not. I'm arguing with the incredibly bold statement that they are the PREDOMINANT SOURCE of trending information.
> What evidence do you have that the predominant source of trending information is motivated state-level actors?
I didn't make that post. But the fact that state-level actors have manipulated Twitter topics with bots, astroturfing, and other artificial means is pretty well established. I'm not sure I'd agree it's the "predominant" source of trending information, but that there has been state-level influence from foreign actors is well documented.
I didn't argue that. I was arguing with the "predominant" part he stated. I'm not a blind moron. I'm fully aware of bot manipulation. Who isn't?
Yet I was downvoted for simply trying to rein in the completely overblown nature of these statements. These assertions since 2016 have been repeatedly made with partial and/or zero evidence.
There is still zero evidence that the manipulation of social media by Russian assets in 2016 actually affected the votes of the American public. As far as I can tell, the blue collar whites in the Rust Belt don't exactly have a large presence on social media, let alone any record of statements saying they are changing their vote because of an advertisement. This is all just playing into the hands of the DNC, who for two successive elections have manipulated the Democratic primary to kill Bernie Sanders, and immediately resort to "the lesser of two evils" mode to bamboozle his supporters into supporting their oppressors.
> I was arguing with the "predominant" part he stated. I'm fully aware of bot manipulation. Who isn't?
You could have easily pointed this out above without being so argumentative. If you are so concerned about downvotes, perhaps you should avoid being so confrontational.
But if you google "twitter bots state level actor" the first three links are two government sites and one is nature.com (as in the big journal Nature).
I can give lots of examples of googling that gets you to conspiracy sites, but I am not sure how those examples would prove anything...
Going by https://en.wikipedia.org/wiki/Combined_statistical_area
to get to 50% of US pop (165 mm) one only needs to cumulate down from the 23mm of the New York metro area to about Greater San Antonio (2,550,960 people), the 25th largest metro area in the country.
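(If you want to sanity-check that kind of cumulative claim yourself, here's a quick sketch; the population figures would come from the Wikipedia table linked above, I'm not asserting any here.)

    # Sketch: walk down CSA populations (largest first) and count how many
    # areas it takes to pass half the US population (~330 million assumed).
    def areas_to_half(csa_populations, us_population=330_000_000):
        running = 0
        for rank, pop in enumerate(sorted(csa_populations, reverse=True), start=1):
            running += pop
            if running >= us_population / 2:
                return rank, running
        return None

    # csa_populations = [23_000_000, ..., 2_550_960, ...]  # fill in from the table
    # areas_to_half(csa_populations)  # the claim above: roughly (25, ~165 million)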
Of course, lacking a one person, one vote principle, the 50% cutoff for electoral counts may look very different. Not my circus, not my monkeys.
If you have never met a representative sample of half the country it seems reasonable to assume anything you don't like is done by someone evil outside it.
>If you have never met a representative sample of half the country it seems reasonable to assume anything you don't like is done by someone evil outside it.
That's part of the problem. You are assuming that everyone who doesn't live in a small town or rural area believes that those who do are bad, evil or wrong.
Well I'm here to tell you that you're mistaken. The vast majority of my fellow Americans (regardless of where they live) are decent people who just want to have a country that respects the rule of law, provides equality under those laws and gives everyone a fair shake to succeed.
There is a very small group that doesn't want that. And they seem to have successfully pitted us against each other, rural vs. urban, white vs. black, young vs. old, etc., etc., etc.
But those are false divisions. Sure, we disagree about a bunch of things. But the vast majority of us agree about much more than we disagree.
So please, don't just paint everyone that's different (in this case, the population density of their residence) with such a broad brush.
I was born and live in the largest city in the US. I've also traveled all over this country, and lived in cities, suburbs and small towns. What I've seen in my more than half-century of living, is that we mostly want the same things.
The problem is that we're not getting them! But it's not the city dweller or the rural resident that's keeping us from doing so.
We could make this country a good place for just about all of us if we stop hurling insults at one another and work together to make our government work for us.
Because, in the end, the government is us. That we allow ourselves to be divided keeps us from working together to create the change we all want.
If we do that, and build bridges of communication to do so, we're in a much better position to communicate and compromise about the stuff we disagree upon.
I don't care where you live or what you look like. Americans are my brothers and sisters, and I want you to have the life you want to live. And I want to have the life I want to live.
Let's not let those who don't want that keep us from getting it.
I get the strong sense that a lot of commenters on HN don't have any understanding of either espionage propaganda operations or their efficacy, and it shows.
Probably more the idea of "it's better to have 10 guilty men walk free than to jail an innocent man". A tweeter may be a propaganda account, or it may simply be a guy with a strong opinion on something.
>First, we will encourage people to add their own commentary prior to amplifying content by prompting them to Quote Tweet instead of Retweet.
>Second, we will prevent “liked by” and “followed by” recommendations from people you don’t follow from showing up in your timeline and won’t send notifications for these Tweets.
These are great, I wish we could have these policies all the time, not just election season. This would probably harm engagement metrics (since retweeting without commentary is low-effort) but increase the quality of the feed by reducing noise and increasing context of tweets.
1. 'Misleading Information' is a targeted category, specified only by manual intervention, leaving the door wide open for selective and biased enforcement.
2. The 'context' they put in the 'Trending' section is often highly editorialized and skewed. An example from yesterday:
> Celebrities · Trending
> Mel Gibson
> People are expressing disappointment that Mel Gibson has been cast in a new film.
> 18.2K Tweets
Mountain out of a molehill, selectively chosen, click-bait editorializing ('people are expressing') - none of this is helpful, and doubling down on it as the only content to put in that pane seems quite foolish.
> 1. 'Misleading Information' is a targeted category, specified only by manual intervention, leaving the door wide open for selective enforcement and biased enforcement.
The "slippery slope" should definitely be on everyone's mind as this enforcement is rolled out, but it should not stop the enforcement. Otherwise your statement becomes the "if we can't fix everything we should fix nothing" fallacy.
I agree that the enforcement needs to be monitored, and also that the enforcement needs to be done. Not mutually-exclusive. There are absolutely agreed-upon lies today: there isn't a deep state of vampire child rapists running the country. Fight me. ;-)
True, but such enforcement requires accountability and governance in that case - things that they have shown no interest in adopting.
There is a consistent penchant among normal, biased 'fact-checkers' to call sympathetic views 'mostly true' in the face of glaring falsehoods, and antagonistic views 'mostly false' in the presence of any minor nitpick; nothing unusual as far as bias goes, but expecting these orgs to play governance roles is clearly unsuitable.
> things that they have shown no interest in adopting
Can you explain what you mean by this? Because that's literally what they (Twitter) have been trying to do for the past 4 years: walking a fine line between censorship and handling an administration that delivers a constant stream of demonstrable and egregious lies.
>I agree that the enforcement needs to be monitored, and also that the enforcement needs to be done. Not mutually-exclusive. There are absolutely agreed-upon lies today: there isn't a deep state of vampire child rapists running the country. Fight me. ;-)
Also Epstein killed himself and had no links to anyone in power. Even if he did they were only Republicans and even if they weren't they were.
I'm not sure why Twitter thinks that quote tweets are better than retweets. Propaganda exists to be spread, and it's long been obvious that "This is wrong/bad [quote]" just serves to further spread the propaganda and raise the propaganda artist's profile. Also, you can disable retweets on accounts you follow but not quote tweets.
> it's long been obvious that "This is wrong/bad [quote]" just serves to further spread the propaganda and raise the propaganda artist's profile.
This has never been obvious to me, and I continue to be confused by it as a position. It seems similar to "don't respond to anyone making blatantly false/harmful claims on discussion forums, because engaging with them just encourages them and spreads their message". In fact, in the Twitter case, it's even _more_ nonsensical to me - because the audience in a forum is general (and so, is likely to contain folks who are "on the fence" or who agree with the troll), but your Twitter followers are, by definition, those who hold similar opinions to you (and so, are likely to agree with your "takedown").
I'm not saying that you're wrong (I've seen enough apparently-smart people espousing this opinion to convince me that I'm the one missing something), I'm saying that I don't understand it. Can you help me understand what I'm missing?
What rdw said in a sibling reply. Also, it's all too easy to raise a troll's profile and turn them into an anti-hero or martyr by dunking on them. This is what happened with the POTUS. It's precisely the reaction he gets, the vehement criticism he gets, that makes him popular. His followers think that if he's pissing off a lot of people, he must be doing something right. The worst possible thing that could happen to him, from his perspective, is to be ignored. "All press is good press", as they say. In attempting to refute the message, you inadvertently make the messenger more prominent than they ought to be. You give them more public influence than they ought to have. This is how trolls rise to prominence, not only in politics, but in all areas. Look at the sports shows, where the loudest blowhards with the dumbest opinions — which they spout on purpose! — have the biggest audience. Dumb opinions make an inviting target to dunk on, and everyone takes the bait. The trolls want to be dunked on, time and time again. They want to be the go-to person for getting dunked on. If I may make a very bad Star Wars analogy, it's like when Obi-Wan says "You can't win, Vader. If you strike me down, I shall become more powerful than you can possibly imagine." (Except Obi-Wan is the bad guy in this analogy. Which he kind of is anyway, because he lied to Luke about his father.)
> Results indicate that corrections frequently fail to reduce misperceptions among the targeted ideological group. We also document several instances of a “backfire effect” in which corrections actually increase misperceptions among the group in question
It's not that they're better per se, it's that there's more effort involved. Pushing people towards quote tweeting instead adds a little extra friction and slows down the spread.
It makes it harder for bots to just retweet and amplify without having to write unique content. If they all just retweet with "I agree", that is easy for Twitter to detect and filter.
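To make that concrete, here's a minimal sketch of the kind of duplicate detection being described; the (account_id, comment_text) schema and the threshold are hypothetical, not Twitter's actual pipeline:

    import re

    def normalize(text):
        """Lowercase and strip punctuation so trivial variants ("I agree!", "i agree") collapse."""
        return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

    def mass_duplicate_comments(quote_tweets, min_accounts=50):
        """Return comment texts repeated verbatim (after normalization) by many distinct accounts.

        quote_tweets: iterable of (account_id, comment_text) pairs -- a made-up schema.
        """
        accounts_by_comment = {}
        for account_id, comment in quote_tweets:
            accounts_by_comment.setdefault(normalize(comment), set()).add(account_id)
        return {c: len(ids) for c, ids in accounts_by_comment.items() if len(ids) >= min_accounts}

Bot networks that actually write unique commentary would slip past something like this, which is exactly why the extra friction matters.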
It's true, but unless your strategy to is to deplatform a viewpoint, adding context is very powerful. Just read the NYT or the Washington Post when they talk about foreign countries and how bad they are. Sometimes they present innocuous facts and you think they're the devil.
Most of these decisions seem to be going in the right direction, but there is something ultra dystopian about the phrase “monitoring the integrity of the conversation” that gives me the jeebies. Why not just say “Moderators will be extra active and we’ll adjust as necessary until the election is over?”
Wow, some of these measures like removing out of network "liked by" and comment-retweeting by default are actually worse for Twitter's business. Looks like they are really trying to do the right thing even if it hurts them.
I think that I support actions like this, though I fear that the people who share fake news will also be the people to complain about Twitter and Facebook being a part of the mythical "deep state" once they start seeing these tags on their posts.
I think about my uncle or cousin who has been posting fake news since before it was a phrase, and how they'll react. It's probably not gonna be a positive reaction.
> In my video, Shellenberger dares say, "A small change in temperature is not the difference between normalcy and catastrophe." Climate Feedback doesn't want people to hear that.
If you are sailing on a boat made of ice through (salt) water that is 3 degrees below freezing, then a "small" change of 3 degrees absolutely could be the difference between normalcy and catastrophe.
Of course that's an extreme example, but presumably Climate Feedback wanted the video to acknowledge that the climate is a non-linear system, with feedback loops and potential phase transitions.
It seems absolutely possible that a change in temperature outside of the range that forests are historically used to could cause dramatically different outcomes for those forests.
OK, but that's not really the point. For purposes of this discussion, the point was that the video got rejected based on Climate Feedback. And it got rejected by them on the basis of "reports" from two people, neither of whom had actually seen the video, and neither of whom thought it was problematic once they did see it.
So either Climate Feedback rejected it on their own, but lied about why (or had a bureaucratic error), or two reviewers told them it was problematic and then lied about doing so (or both forgot that they had done so).
Outsourcing checking for disinformation to people who lie and/or have bureaucratic errors is going to have issues, even if those people are omniscient as to the facts of the subject matter... but nobody actually is that, either.
So "possible" now means "necessarily"? Because FB's checkers didn't take down any videos claiming that climate change exclusively caused the fires, did they?
It is clear as day that the views of the CEO have influenced the Twitter platform's policies and biased them against certain groups: bans or shadow-bans enforced on certain users, and tweets from certain users flagged.
If they themselves are compromised or are part of a scandal, they will ban anyone else from spreading this information to try covering it up.
You don't believe business leaders have the capacity to separate their personal beliefs from corporate policy for a company that serves over 300 million people across the world?
I believe, on the basis of evidenced behaviour, that that is not the case, no. They will consistently feel morally compelled or externally pressured by peers to put their finger on the scale.
I think using the word "disputed" is a big mistake. It implies that a lie has some validity and that there's actual public disagreement.
There's nothing wrong with taking a side between truth and lies. Even their example tweet should be clearly labeled as wrong, not "disputed". There is no serious dispute, only bad-faith political attacks against the democratic process.
When will Twitter provide all users with the ability to easily verify their identity?
One of its biggest issues is troll armies — this would quickly vanish if we could filter by whether someone had validated who they were and where they were located.
Nice steps, but seems a little late. I hope Twitter doesn't actually believe we are currently "ahead of the 2020 US Election". The election is already underway. More than 4 million people have already mailed in their ballots [1]. I guess better late than never but any social media company still figuring out their 2020 election policies is way, WAY behind.
Twitter keeps running themselves into a trap, and it is unbelievable to me that they don't see it: they cannot be the arbiters of truth. It will not work, because inevitably they'll miss something, and then people will assume that that missed thing is true.
The solution is NOT to do censorship based on truth. The entire system needs to be treated as unreliable, because it is unreliable. People need to read things on Twitter (and elsewhere on the internet) with skepticism, and they need to evaluate whether those things are true or false individually.
This is a good take. Skepticism is one of the most important aspects of thinking critically, and building a library of skeptical techniques so as to be able to sniff out things that aren't true is crucial to being able to navigate through a sea of information.
It's sort of like the Gödel incompleteness theorem of social networking: you can either be incomplete or inconsistent; any system which tries to be perfectly consistent will find itself excluding massive amounts of potential dialogue. This in turn will result in a user exodus.
Sorry, let me be clear, it's absolutely a good thing, and I exodized (if that's a word) quite some time ago. It doesn't seem to be happening fast enough, which I will attribute to a lack of a clear, decentralized but dead simple alternative.
Look at how quickly something like BitTorrent was adopted back in the day.... it's an esoteric and difficult to understand thing under the surface, until clients and sites came along that made it super simple.
Maybe Urbit will evolve into this, maybe Mastodon, we'll see.
But there's no way to improve human behavior on BS detection. Certainly not this close to an election.
What is one to do in an environment where a ridiculous number of people believe that the president is fighting a secret pedophile sex ring? It certainly doesn't seem to be a problem the free market of ideas is addressing successfully.
I present to you the differential truth timeline: after informing users, randomly inject tweets that state plausible factoids that may be true or false, then put a button on all tweets that you press to see whether Twitter believes the tweet to be true, false, or unknown.
If this doesn't work, they can remove the button entirely during sensitive times (like elections).
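Purely to illustrate the proposal (every name and factoid below is hypothetical), a sketch of injecting labeled calibration items into a feed, with the "button" as a verdict lookup:

    import random
    from dataclasses import dataclass

    @dataclass
    class Factoid:
        text: str
        verdict: str  # "true", "false", or "unknown": the platform's belief, hidden by default

    # Hypothetical calibration pool of plausible statements with known verdicts.
    CALIBRATION_POOL = [
        Factoid("Honey stored in a sealed container essentially never spoils.", "true"),
        Factoid("The Great Wall of China is visible to the naked eye from the Moon.", "false"),
    ]

    def build_timeline(organic_tweets, inject_rate=0.05):
        """Interleave calibration factoids into the organic feed at a small random rate."""
        timeline = []
        for tweet in organic_tweets:
            timeline.append(tweet)
            if random.random() < inject_rate:
                timeline.append(random.choice(CALIBRATION_POOL))
        return timeline

    def press_button(item):
        """The per-tweet button: reveal the platform's verdict, defaulting to 'unknown'."""
        return getattr(item, "verdict", "unknown")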
The principle of using skepticism is a sound one, but what do you do when the majority of your users don't employ that practice? Just let misinformation spread and just be okay with the ramifications on society because "it's not my problem my users are gullible"?
A meta comment: I've noticed this is a variation of another argument I hear from the national security side, usually from the Kissinger-Brzezinski-ites in particular, who argue that their anti-democratic actions are justified because the people are too dumb to govern themselves (the so-called "Crisis of Democracy"), and therefore, in some Machiavellian realpolitik way, their unconstitutional actions are warranted. (One of Kissinger's famous quotes being something to the effect of "The illegal we do immediately, the unconstitutional takes a little longer.")
I keep thinking back to the freedom of speech circuit Christopher Hitchens did and some of the great speeches he gave on the topic, some of which are him broadly denouncing the ability, desire, or acceptance of any person or group of persons to be the one who decides what you can and cannot read and/or say. I wonder where the voices of whoever holds those intellectual reins these days are, because I'm not hearing them.
Ideally twitter shouldn't really be able to spread "misinformation" because people shouldn't be using it as a source of information to begin with. These moves they have been making around fact checking, censoring things etc, make this problem worse. They're trying to increase people's trust in twitter, which they shouldn't be doing, they should be actively trying to do the opposite, and they will fail to remove all of the misinformation.
So the result will be that people will still see misinformation, but they'll trust it more. That's the exact opposite of the intended effect.
This was my take as well. I get that they want to “clean up” some of the most egregious tweets but that’s a game you can’t win. It’s just a rabbit hole where everyone will accuse you of being biased.
This is like saying "our system isn't broken, users just need to be smarter." Sure, that would be great, but I don't see it happening, and in the meantime real harm happens.
Why doesn't that cause Wikipedia to completely break down then? Wikipedia tries to be in some sense an arbiter of truth, they do get it wrong frequently, and yet it still holds together.
Wikipedia requires sources for its information, especially anything contentious. If Twitter somehow obtained the manpower and speed to do that, then it might be OK but there's no hope of that ever being possible. The rate of content generation is too high, the demand for immediate publication is too high, and the effect of removing something a few days after it's posted is too little too late because it will have already had its effect. Nobody would use such a slow and boring service where almost every post gets automatically put on hold until someone can find a source. Users obviously won't do their own research - that's like schoolwork!
Wish somebody a happy birthday. Wait till tomorrow for Twitter's employees or crowdsourcing to verify it. And it only gets verified if that information is actually available to them.
Wikipedia has a community system and process to delete and revert statements, and contributors can mostly only evolve a collection of documents that do not draw attention to the contributor. Individual contributions mostly do not go viral. And for whatever reason, enough members prefer this system to keep it.
What will Twitter do when the President loses the election, as he is likely to, and then immediately takes to Twitter to proclaim that a fraud has been perpetrated, as he is virtually certain to do in that scenario?
Will they really just tag his claims with a "this is misleading?" Or will they take the approach that is most certain to preserve order, and ban the President?
This is not a drill. This is a question that Twitter will be confronted with just this once, and never again. No new policy need be created. If principles are compromised in doing this, it is a one-off.
Putting aside debates about cancel culture and ideological bias, Twitter already censors various kinds of uncontroversially harmful speech, and the President's claim--to the small but not insignificant segment of his supporters who are angry, credulous, and well-armed--that the election was stolen will surely qualify as such a statement.
True, but a lot of, if not most, people don't trust the media either. I mean, just look what happened to the debate moderator today.
My personal take is that Twitter is mostly a big soapbox of fringe groups, mostly left, and half of that are bots. It's just a bunch of people screaming over each other and posting reaction GIFs, I don't see how any adult can find it engaging.
Maybe sites like Twitter, Facebook, etc. should be US only. I mean, only US users would be allowed to use them. If they do this for the US election, then which election will be next? The UK's? Australia's? And why not, really? People would even cheer for it.
I think the situation in the USA is quite unique. The UK and Australia are not (as of yet) at any serious risk of their democracy being undermined in the next election.
Sure, you can say they could have had similar measures before the Kyrgyzstan or the Belarus elections. But there, bad actors were perfectly able to undermine democracy without the aid of social media.
Taking all the necessary steps so that one exact candidate (Trump) won't win the next election is "undermining democracy" already, especially as those steps now come from both business (Twitter, FB) and the media (Washington Post, NYTimes).
Maybe the Democrats and the people who are called "liberals" don't realise it just yet, but they've lost the game because they've started playing Trump's game. Like I said, they might technically win it this time around, but the next "Trump" will probably be younger and even more charismatic, and he (or maybe she, why not?) will have to conquer a public scene that by then will already know that the rules of democracy don't exist for either party, so why care about the democratic process anymore?
If it matters I'm not from the US, have never set foot there, just saying how I see things from half a world away.
If fact-checking the election results biases against one candidate and not the other, I suggest the issue is the behavior of the candidates, not Twitter.
It's unfortunate how few people on the inside see it this way. The slippery slope is real, and we shouldn't start sliding down it just to beat Trump.
Note I never said “Influence election”. I said the more vague term “undermine democracy”. That was on purpose.
Social media is used for personal and political gain throughout the world. You could argue (and I wouldn't disagree) that Brexit happened because bad actors used social media to undermine democracy in the UK. However, the UK is still democratic (at least to the same extent that it ever was). The same cannot be said if Trump uses social media to declare himself the winner of a partially tallied election.
I think the nearest international example of the current (very likely) US situation is the Venezuelan presidential election of 2018, where Juan Guaidó declared himself president even though Nicolás Maduro won the election. But even then, social media wasn't the medium of choice to go about it (international leaders were, albeit leaders who used social media).
Can people in the big cities not accept people in rural areas reject everything they think is good? It's not a foreign plot, it's a fundamental disagreement about values.
I don't see it that way at all; I think it's becoming more and more clear that there are actually conservative voices in these organizations (if not fully conservative, then at least economically conservative).
As social media gets larger and larger, these companies need to (1) at least pretend to be impartial and (2) show they are doing something about the "problem", or else the loser will blame big tech and slap them with regulation. Trump hasn't even lost and he's already talking about repealing Section 230, and Facebook (I think rightfully) is getting a lot of attention that could lead them to get broken up.
No one wants a target on their back when some research firm uncovers that "bad guy X" spent a couple million dollars astroturfing and/or abusing the platform or skirting election advertising regulation. It doesn't matter if that bad guy is Russia or some American Super PAC
Stop that apologist crap. I grew up in a rural area and I know these people quite well. People in rural areas reject science, general knowledge and social norms of the "big cities" because they are frightened and ignorant, not because of some equally valid world view.
Their response to climate science is stubborn ignorance. Their response to different cultures, races and religion is bigotry. Their response to democracy (when it doesn't suit them) is reactionary. Their response to education is closed mindedness. There's nothing to respect here.
Rural people are happy to have iPhones, as long as they don't need to understand the science behind the technology, understand the multicultural aspects of its existence, or respect the people who created it.
As Asimov said 40 years ago:
"There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that my ignorance is just as good as your knowledge."
I'll spot you that it's a fact, ignoring the racism behind how "violent crime" is defined. Using that fact as an irrational basis for despicable racist behavior doesn't jibe with my worldview.
Ahh! Conservative projection at its finest, combined with a little cherry picked racism as well. Great job! You represent your kind well. I'm not a Republican so I don't deny actual facts.
Even if your numbers were up to date and accurate, which they aren't (get out of your bubble, Fox boy) the causes are tied to much more than race so it's a meaningless number designed only to incite racial hatred. Good job again!
To repeat: Fear and ignorance. Which you just gave us a great example of.
Also, to the other HNers. It's election season, so enjoy the "overly political" comments for a while.
I'm not exaggerating when I say that I truly believe that if Twitter shut down for a few weeks, the world would be a better place. Twitter's influence on world events has been a net negative because of their refusal to apply rules evenly across their users and their willingness to turn a blind eye to blatant manipulation by government entities and others filling their site with propaganda.
Free speech has taken quite a beating in recent days. It's weird watching many of my friends and people I respect cheering while it happens. I honestly don't know what we do to fix our current situation where lies so easily go viral, but making Twitter arbiters of truth doesn't seem like the best solution.
Twitter is a community and all communities require some level of moderation.
There's never a situation where you let people say whatever they want, whenever they want, without any moderation at all, and everything goes well. You have to set some sort of standard. We already do this as a society, so why shouldn't that extend to Twitter?
Twitter has 330 million monthly active users and 145 million daily active users. Is that a community? It seems more like an unruly mob. :-)
I agree with the point that all online forums should have moderation, but for me the question is, should something like Twitter even exist? A centralized world discussion forum is not necessarily a good and healthy thing. Can humanity handle having Twitter?
> If twitter disappeared, 5 copycat sites/apps would emerge from the ashes overnight.
5 is not necessarily a problem. 1 is a problem. :-)
It's not easy to reach "critical mass" though. Many social networks have tried and failed. Certainly the Twitter alternatives (e.g., App Dot Net) tried and failed. And hopefully we've all learned some lessons from Twitter and won't make the same mistakes again.
Or if we did make the same mistakes again, then humanity is truly doomed, and there's nothing we can do to stop it...
Twitter is absolutely a community, it's just buckling under the weight of its scale. The problem we're facing right now, to me, is that Twitter and Facebook have built communities that they are incapable of moderating effectively because of their scale, but they ask us to give them time to manage this difficult problem and we give it to them, absorbing more damage while they make billions. How much time do they need? They've all been around for over a decade. It's time to start demanding results.
These platforms aren't required to exist. If they can't prevent their services from causing damage to societies across the world, then they should be required to fix that. If they can't fix it, then they should be shut down.
Good moderation sets standards for how we communicate that everyone is honestly able to meet regardless of their worldview. Taking this site as an example, the guidelines focus on the tone, relevancy, and novelty of your responses. It encourages civility and curiosity, it does not try to calculate the truth value of what I'm saying.
I see many "it's just moderation" takes on what Twitter is doing, but what Twitter and other platforms are now doing goes beyond what most platforms have traditionally enforced in their moderation. Hiding user content that the platform unilaterally perceives to be untruthful is really a new milestone.
But because this forum has guidelines for those aspects of speech, and its members abide by them, it opens up space for disagreements to take place and truth to surface.
Twitter has rules for things you can and can't say, but since the users don't all agree to the same guidelines, and Twitter enforces them very inconsistently, there's no pressure to follow them except to the point that you may get suspended or banned from the site.
Well, you start by acknowledging that by hosting hundreds of millions of people on your platform, you're also hosting the real-world problems that come with them. And how you handle that at scale while mitigating the consequences turns into a nightmare.
Rather than evolving along with the needs of their users, Twitter froze its functionality years ago. Twitter could have added a range of moderation tools, allowed users to opt out of retweets, trends and such, allowed people to create groups and communities, fostered active cooperation with key users and its communities, and so on.
Why didn't that happen? Because those hundreds of millions of users aren't paying customers. Building all of those things simply isn't worth the investment, or pressing enough, in terms of optimizing for revenue through advertising and business intelligence.
An audience is a valuable commodity, and so what Twitter doesn't want is risk losing that commodity. Tweaking the functionality of the platform is such a risk.
However, hosting hundreds of millions of people who aren't customers is also a huge liability. The worst case is having governments representing those people impose hard regulatory frameworks that hurt revenue and profitability.
In a way, it's akin to the lions of the circus. People pay good money to see the lions perform, but the circus has to accept the risks that come with keeping lions. Which includes getting shut down because the lions escaped and went to town.
So, why does Twitter cater to hundreds of millions of people, and why does the circus keep lions? Because the profits they gain from doing so outweigh the risks they have to accept.
Getting back to your original question, Twitter is basically humanity's stream of consciousness materialized. It's prohibitively expensive, and rather utopian, to moderate that in a meaningful way that caters to everyone's contentment. By contrast, Twitter's aspiration isn't to provide the best moderation; it's to implement just the bare minimum in order not to lose its value.
Looking back at the blog post, you'll find that most of what they propose are tweaks to how they filter content and a small tweak to how you do retweeting, based on existing functionality. No fundamental changes to the feature set are put into place. On the scale of Twitter's operation, that's mainly targeting low-hanging fruit, since introducing more substantial changes would be a huge gamble that might end up hurting the business.
I think the “how” is pretty simple: the company is made up of people, the people define a set of principles, and adhere to them. They don’t have to accommodate everyone (and in my opinion, shouldn’t).
I think the hard part is doing this in the face of money. We have all seen how platforms allow awful things to exist because of the economic incentives to do so.
Not elected people, not respecting the most basic of constitutional principles. Not only that, but you can't enforce your "set of principles" on billions of tweets every hour of the day, not with people at least, so you defer to bots, which are incapable of discerning what is free speech. They can't apply your principles with the discernment of a human, and they can't enforce what's legal or illegal, just as they can't tell copyrighted music from public domain music.
I disagree with the constitutional angle. I do think enforcement is possible, it's really no different from enforcing the law IRL. I don't think having no terms and/or requiring private companies to have no terms is a better situation.
>> There's never a situation where you let people say whatever they want
> What? There are tons of places where the only restrictions on what you say, is determined by the law.
So there are still restrictions then and people can't say whatever they want?
> The standard could be "Whatever is allowed by US law", which is extremely non restrictive on speech.
voat.co is a clone of reddit where the only difference is no restriction on speech. The top posts are usually extremely racist and antisemitic. It is a horrible horrible platform and the only change is no restriction on speech.
The amount of speech that is allowed by US law is so extremely broad that those restrictions may as well not exist.
I.e., let's say someone were to argue, "I think it is totally OK for the government to arrest and execute people who disagree with the government in any way!", and then backed up this belief by saying, "Well, you support restrictions on free speech as well! You don't think that people should be able to send mass death threats to everyone, 100 times a day! Therefore, since you support restrictions on free speech, it is totally OK to arrest anyone who disagrees with the government in any way!"
This is the argument that you are making. And it is a bad one.
The reason why it is bad, is because the restrictions on speech, in the US are almost non-existent, and that is not a good justification to do other things that are much more restrictive, such as in my extreme example of arresting anyone who disagrees with the government.
You made some huge leaps in what I was arguing. I never said the government should expand their freedom of speech restrictions or anything like that. I simply said there are still restrictions, even if they are small and that sites without restrictions are horrible.
But the point is that the fact that restrictions on speech in the US are so extremely small and minimal is not a good excuse to justify much larger restrictions.
If the existing restrictions of US law are so minimal, it makes no sense to bring them up as any sort of justification for much larger restrictions.
It is just not relevant to mention them, because those restrictions are small and therefore have nothing to do with much larger restrictions.
> The standard could be "Whatever is allowed by US law", which is extremely non restrictive on speech.
That would result in 90% of user generated content being spam. There's no law against me sending you 10 spam DMs a day on Twitter. Is that really what you want?
Giving people the choice of whether they want to accept moderation or not, seems reasonable.
For example, if twitter allowed everything that is allowed by law, but also gave people control over content that they see, then that seems fine.
There is a fundamental difference between disallowing something and giving a user the option of not seeing certain content.
I.e., I do not believe that the block feature is censorship, for example.
> Is that really what you want?
I'd want to have the option to control the content that I see, as opposed to twitter forcing its own decision. I don't see a problem with someone choosing to allow spam to themselves, if they are OK with that.
Any platform ought to have the right to prevent falsehoods from being spread on the platform. A platform should also have the right to choose its topic; otherwise a car forum could be overrun by motorcycle enthusiasts. As long as a platform applies its rules in a just and fair way, I see no issue with preventing lies or keeping the topic on track.
Twitter does. On Twitter. That's practically a tautology.
Is your point that you didn't know the answer, or that we should descend into anarchy because attempting to answer difficult questions is tricky?
Both sides think Twitter is biased against them. The only reason you think it's biased is because of the same tribalism that you're baselessly accusing them of.
It's blatantly obvious that Twitter can never stray far from the middle because it would lose them one half or the other of their audience and business.
>But who gets to decide what is truth and lies? If twitter was bi-partisan, sure, but the people in charge clearly have a heavily bias.
You believe that and I respect your beliefs. I don't use Twitter, so I'm not really in a position to agree or disagree with you.
That said, so what? The shareholders of Twitter own the company. If the company is doing what the shareholders want (within the strictures of the law), then so be it.
Free speech cuts both ways. I can say just about anything I want. But Twitter or Facebook or HN, for that matter, is under no obligation to host my speech.
The only entities that are forbidden from censoring by the First Amendment are the Federal government and (through the 14th Amendment) state and local governments.
So if Twitter censors (or doesn't censor) content that you think shouldn't (or should) be censored, you are free to argue for your point of view and encourage others to do the same.
And Twitter can do the same.
Even if (and I take no position one way or another) Twitter is biased and acts upon those biases, so what? That's their right under the First Amendment.
It's messy, and it pisses people off, but there's no provision in the First Amendment that speech must be orderly or inoffensive.
> It's weird watching many of my friends and people I respect cheering while it happens
They understand that "free speech" is one of many principles we value, that these principles can clash, and that we inevitably have to choose between them in some cases, even if those choices make some people uncomfortable. Before today, speech was never so free, offline or online, as to permit fraud. It's a terrible civic abdication to write off criminal behavior because it's so easy to get away with online and because it looks so much, on the surface level, like "free speech."
Or because we live with too much fear and paranoia to accept even a discussion of establishing standards. Twitter making any judgment that some content is fraudulent is not the same as becoming the editorial boogeyman from the extremist side of whatever political party is opposite to yours. That's fear. That's an appeal to the slippery slope fallacy.
My trip to the store is better off for knowing that if someone was shouting about Jesus or conspiracies, they'd be asked to leave. We don't question that limit on speech. There's a reason you see people doing that on street corners. If there was never any deplatforming, all platforms would inevitably suck.
Is it ideal that Twitter and Facebook have so much power? Maybe not, but that's just the reality we're in. We can't be paralyzed into inaction. There's no perfect body that everyone would trust to do the job and someone has to keep Twitter and Facebook from sucking too hard. Don't let your paranoia make you overlook that they'd lose half their audience/business if they ever strayed too far from the middle.
I agree. I'm part of the "let the buyer beware" camp, despite the chaos that might entail. I would rather see social media sites focus more on not putting people in bubbles rather than trying to police what they say.
If people can't see through the BS, then we get what we deserve. Every change like this is going to have unconscious bias and will likely be taken as proof of manipulation by people who don't share that bias. The only winning move is not to play.
I have this gut instinct we're veering towards the way China censors their internet. The great U.S. firewall, to secure our citizens from "disinformation"!
As far as "great firewall style" American censorship (i.e. state removal of content on a categorical level) the only proposals I'm aware of have been to block TikTok and WeChat on the grounds of "national security".
I think the only way out is for governments to prohibited targeted ads as a business model. It’s deeply subversive to the informed populace when everyone lives in completely different realities based on their advertising profiles.
Making a business model like that illegal would force these services to go the paid route, or to fall back on less subversive, and alas, less effective, advertising.
10 years ago, it was mostly China we spoke about regarding censorship and "social credits". The CCP has made deep inroads into US society during this time. They're succeeding in changing people's thoughts and feelings regarding free speech.
You also never had the ability to be talking on somebody else's property while sitting in your own home until very recently. Things change. The streets are still public but the streets aren't where we're talking anymore.
The classic rebuttal to free speech arguments which I'm sure you've heard, is that the first amendment doesn't apply to private companies, and that your right to free speech doesn't entitle you to a megaphone, etc.
I think a more nuanced and useful way to look at things is to think of Twitter as an amplification machine rather than a speech machine. I can say what I want out loud, I can write whatever letters I want, I can make my own website if I want, etc., but putting it on Twitter causes Twitter to amplify it. Many of these announced changes pertain to what Twitter chooses to amplify - and how - rather than what it permits people to say. (As far as I can tell, the only tweets they are actually removing are those that call for violence, a standard for censorship that seems quite reasonable.)
If we think in terms of how and when to amplify speech, rather than trying to figure out what kind of speech to censor, we can hit upon more workable improvements. Twitter's proposals here, under that framing, are a mixed bag.
Twitter provides several ways to amplify posts - some of which are intentional on the part of users, some not. For example, if I follow a person, I'm telling Twitter to show all that person's posts in my feed. If I reply to a tweet, I'm telling Twitter to show my post to that person in their notifications, and also show it to other people who engage with it. If I quote-rt a tweet, I'm telling Twitter to show it to everyone who follows me, alongside my commentary. Etc.
On the other hand, if I like a post, or engage with it in any way, I'm not telling Twitter to show it to anyone - but my Like may cause it to recommend the post to others, sort it upward in the algorithmic timeline, etc. This unintentional amplification can have unintended consequences, because the system cannot tell when engagement metrics are due to positive or negative characteristics of the post.
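A toy sketch of that failure mode, assuming (purely for illustration) a ranker driven by raw engagement counts plus time decay; approval and outrage look identical to the scorer:

    def engagement_score(post, half_life_hours=6.0):
        """Naive ranking: likes, replies, and retweets all count the same, whether they
        signal approval, correction, or outrage. `post` is a hypothetical dict."""
        raw = post["likes"] + 2 * post["replies"] + 3 * post["retweets"]
        decay = 0.5 ** (post["age_hours"] / half_life_hours)
        return raw * decay

    posts = [
        {"id": "thoughtful-thread", "likes": 120, "replies": 10,  "retweets": 15, "age_hours": 3},
        {"id": "outrage-bait",      "likes": 90,  "replies": 400, "retweets": 60, "age_hours": 3},
    ]

    ranked = sorted(posts, key=engagement_score, reverse=True)
    # "outrage-bait" ranks first even if most of its replies are rebuttals.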
Quote-retweets are also rife with unintended consequences. If someone "dunks" on a post by quote-retweeting it with criticism or mockery, they're betting that their comment is going to lower the status of the person they are quoting or persuade people the post is false. But the folks reading their post may not agree - and the original post might have been a bad-faith attempt at distraction, which a dunk then amplifies. Alternatively, if a popular account dunks on a much less popular account, it can (sometimes intentionally, sometimes not) trigger a wave of hostility and harassment.
So I like parts of Twitter's changes here - they have the right to try and amplify true information more than false information, and removing flagged posts from recommendations will do that. Additionally, removing recommended content from non-followed accounts from the algorithmic timeline is positive as well - it reduces unintentional amplification and puts more control in the hands of users. But their encouragement of the quote-retweet is concerning. They don't seem to realize how effective a weapon it can be.
I would argue that any automated recommendation of user-generated content needs to be carefully controlled, if not abolished altogether. Recommendation systems cannot distinguish between content with high engagement due to quality, and high engagement due to emotionally manipulative dishonesty or other negative factors. And specially interested (or bigoted) political actors, who are simply interested in "the most effective way to attack / promote X" rather than arriving at the most truthful position, can test and manipulate those recommendation systems far more effectively than folks trying to engage with nuance and good faith.
This "situation where lies so easily go viral" seems to me to have intensified starting in around 2014 to 2015 - when Twitter introduced the quote-retweet, and Facebook introduced the algorithmic timeline. I don't think "free speech" is the right framing for thinking about it. The recent phenomenon is not the existence of extremist political movements or medical misinformation, but rather, their amplification.
a) You have no right to free-speech on Twitter or any other platform.
b) When people spread false information at scale, it risks the very foundations of civilization.
Anyone with an actively managed platform has to try to ensure some kind of intelligent fealty - the trick is to do it without bias or any kind of ideological orientation.
Again, you don't understand what that means. They are a private entity - "censorship" has no meaning there. You have exactly 0 rights on the twitter platform, as it's not a government entity.
That's not exactly true. The public has the right to have all laws applied fairly. The social media platforms have enjoyed the rights of a neutral forum without the liabilities that come with being a publisher. Yet the platforms act like a publisher, deciding what is seen or not seen, and the public directly or indirectly suffers as a result. This is a pretty clear case of actual rights being violated, despite the distracting narrative of "muh private company".
The tech companies have a clear liberal bias, despite what the media wants you to believe. Tech savvy people tend to be more liberal, and tend to spend more time on social media, so you have a bubble of opinions that drown out the other noise (which already gets downvoted to the bottom, shadowbanned, or censored anyway), resulting in the faux appearance of a consensus that there isn't a real problem here. Anyone with enough influence to have this opinion really noticed in the mainstream media, will be discounted as a "conspiracy theorist".
No one is making Twitter an arbiter of truth. If you don't like the way Twitter does business, there are other platforms that will cater to your alternative truth needs.
I'm not seeking alternative truth. Everyone cheers for Twitter because they think they are going to censor people in a way they agree with. But what about the day that changes? What about the day they decide to censor you instead?
On one hand, I agree that businesses should be free to operate as they like & you can vote with your feet if you don't like the rules. However, we also have to be realistic about the fact that the largest tech corporations are now more powerful than most countries. Opting out isn't a very practical solution.
> the largest tech corporations are now more powerful than most countries
I think this is fundamentally the problem, and not free speech. Sites like Twitter and Facebook are of unprecedented historical size. Nobody really worried about whether some little discussion forum allows free speech or censors itself. But Twitter and Facebook want to become "the world's discussion forum", and I'm not sure that's even a thing that should exist. You can't really have free speech if there's only a small number of platforms for speech.
Then maybe vote in better people who actually give a shit and break up tech monopolies; otherwise we're just gonna live in hell world. The Clintons should've broken them up in the '90s and the Bushes in the 2000s, but they never went all the way, and we've seen this behavior continue into the current admin.
>But what about the day that changes? What about the day they decide to censor you instead?
What about it? People get banned from Twitter all the time - literally every hosted platform and service moderates and bans content as they see fit. The terms of service you agreed to when using that platform likely includes phrases like "for any reason" and "in perpetuity throughout the universe."
So I guess I make another account? Or go somewhere else? That's not exactly the boot of fascism stomping on your face forever.
>However, we also have to be realistic about the fact that the largest tech corporations are now more powerful than most countries.
No they aren't. Countries have armies and the monopoly on violence. Countries can arrest you, torture you, confiscate your possessions, make your beliefs illegal, and murder you. The only thing Twitter can possibly do to me is delete my account or ban me. They only control their one platform, they don't control the internet or the entirety of media. They're not going to send me to the gulag or throw me into the ovens. They're not going to erase me like Stalin.
To say that any social media platform is more powerful than most countries is ridiculous.
Social media platforms have an even more powerful weapon: the ability to shape your perception of reality. Sure, they can't arrest you, but they can slowly shift your world view just by amplifying some stories and not others.
I do not consider this to be a major achievement on my part, but I have successfully managed to navigate my way through life without using Facebook or Twitter to facilitate my worldview. There seems to be this assumption that we are compelled to use these services to engage with the world around us. Heartbreaking to see this...
> That's not exactly the boot of fascism stomping on your face forever.
Alright. Let's say that all of the major social media platforms ban any discussion of raising their taxes, or enacting more regulation on them, or let's say they straight up ban major politicians from the platform.
Oh, and also there is no other significant competitor that matters, and it is unlikely that any competitors will pop up anytime soon.
Are you just OK with that? You are just going to say "well, I guess it is their platform, and they can do what they want, and it doesn't matter that no competitor has any significant chance of being successful".
> They only control their one platform, they don't control the internet
OK, now what if almost all of the major platforms do it, and there are no serious competitors?
Let's expand this out even further. What if Walmart did the same thing? Along with multiple other grocery stores.
You want to raise their taxes, well sorry, you probably aren't going to be able to buy food from any major grocery store.
Or how about if common carrier laws were removed and your power company did it? Or the water company, now that this is legal?
Your passive acceptance of this type of thing can be extrapolated out to horrifying results.
>Let's say that all of the major social media platforms ban any discussion of raising their taxes, or enacting more regulation on them, or let's say they straight up ban major politicians from the platform
Unless we're talking about a situation where a government controls the internet and makes it illegal for anyone but those social media sites to set up a server or host content, that would create an immediate market demand for alternatives, and those alternatives would appear - although alternatives would very likely already exist.
>Oh, and also there is no other significant competitor that matters, and it is unlikely that any competitors will pop up anytime soon.
You keep piling on qualifiers like "significant" and "serious" yet before social media silos it was entirely possible to reach millions of people and go viral with hosted forums and personal webpages. Hacker News alone gets a ton of traffic and it's hardly mainstream. What will happen is that the web will adapt as it always has.
It's not as big a problem as you make it out to be. Don't confuse the size of these sites' userbases with proportional degree of control over anything outside of their domain.
>Ok, now what is almost all of the major platforms do it, and there are no serious competitors?
>Let's expand this out even further. What if Walmart did the same thing? Along with multiple other grocery stores.
>You want to raise their taxes, well sorry, you probably aren't going to be able to buy food from any major grocery store.
(...)
>Your passive acceptance of this type of being can be extrapolated out to horrifying results.
Everything can be extrapolated out to horrifying results if you try hard enough and care little enough about reality. But your scenario, in which every social media site (including those hosted in other countries), every business, and every government utility and service conspire to control all forms of communication and deprive people of basic services as a means of oppression, is so far removed from any conceivable reality that I have to question whether you're commenting in good faith. Otherwise, you're doing a good job of making my point for me.
> and every businesses and government utilities and services conspire to control all forms
They don't even have to all conspire together. Instead, merely one of these companies doing it, alone, could have a huge effect.
For example, a power company, or water company doing this, all on its own, not conspiring with anyone at all, would be very bad for society, if it were legal (fortunately, it is not legal for a power company to do that).
This is because it would be extremely expensive, and difficult to get another power line, or water pipe, to your house. There are huge barriers to entry.
If only a single power company did this (after the law was changed), it would be very difficult for any of its customers to resist such changes. They'd have to take extreme actions, such as moving, or paying for whole new pipes to be dug in the ground to their house.
Your comment comes across as smug and condescending. Not to mention that moving to another platform doesn't change the secondary and tertiary effects that mass censorship has on society, and therefore you, as a platform user or not.
Just turn it off for four months. Just turn the whole thing off and everything will be fine.
Honestly I wouldn't mind if the government DDOS'd them, or starting blocking all social media sites for four months. This is an emergency and these corps are behaving in an irresponsible manner which threatens our national security.
>People on Twitter, including candidates for office, may not claim an election win before it is authoritatively called. To determine the results of an election in the US, we require either an announcement from state election officials, or a public projection from at least two authoritative, national news outlets that make independent election calls.
Given how poorly the national news outlets performed on this the last presidential race, Twitter should really just limit this to state election official only.
At a high-level it’s simple: Cryptography combined with peer-to-peer routing. Cryptography provides private communication and sovereign identity, and peer-to-peer overlay routing provides censorship resistance. The technology already exists for direct, verified, and private communication between peers over any network topology.
The challenge is combining the aforementioned technology into an accepted standard and intuitive user experience. The business model would be straightforward hosting. A major barrier is the lack of funding to get it off the ground.
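For the identity piece, a minimal sketch of the sign-and-verify core using the Python `cryptography` package (key distribution, storage, and the peer-to-peer transport are all left out; this is just the primitive such a system would rest on, not a full protocol):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Each peer holds a keypair; the public key acts as the identity.
    signing_key = Ed25519PrivateKey.generate()
    identity = signing_key.public_key()

    message = b"posted over any transport, relayed by any peer"
    signature = signing_key.sign(message)

    # Any recipient can verify authorship without trusting the relays in between.
    try:
        identity.verify(signature, message)
        print("authentic")
    except InvalidSignature:
        print("tampered or forged")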
Honestly, if one of the billionaires bought Twitter, used patents and the legal system to litigate ruthlessly against any new startups that tried to emulate it, and kept any similar system off the internet for 10-20 years, I think humanity would be better off.
If the only thing stopping one party from ripping up ballots for their opponents is that someone might be able to tweet about this happening, then the election has already lost its integrity.
It's also worth thinking about which is more likely to happen: Twitter suppressing a true claim about election integrity being violated, or Twitter suppressing a false claim. Bear in mind that anyone can make a false claim.
From Facebook's point of view, we have recently learned that the whole circus around Cambridge Analytica was a gigantic farce and that they had no more access to anything than anyone else - and that they hardly did anything with it to begin with.
2016 taught us that even if everyone puts their chips on one candidate, and says that they have a 99% chance of winning, it may not be a reflection of reality whatsoever.
Now, given that Twitter is primarily a Democratic Party platform (no silly HN poster, I don't need to post evidence to back that up), I wonder why they are deciding to do this.
Do they know something about the election trends and which way the winds are blowing that they are intentionally trying to silence it to influence things one way or another?
If you have anything left in that head of yours, my dear HN reader, it would be a good time to flex it and read between the lines when these posts turn up. And to not believe everything you read, especially from corporations that hijack social and civic responsibilities for financial gains.
At any rate, make sure you get a comfy couch and some popcorn if you haven't. This will be a fun ride in a month's time.