Hacker News
Man Who Built the Retweet: “We Handed a Loaded Weapon to 4-Year-Olds” (buzzfeednews.com)
106 points by rmbryan 6 months ago | 137 comments

Wetherell was not involved with launching retweet, and didn't implement much (maybe any?) of the code in that Nov 2009 launch. He probably did write a version of the frontend for retweet at some point, and had done some backend code, but all of that was largely rewritten by launch day.

To be fair, he probably didn't claim that, but playing up his involvement beyond what it actually was probably helped the article.

Reading through some of the comments here, it occurs to me that there's a moral trilemma at play here: you can have power, you can have righteousness, or you can have impartiality, but you must pick 2 out of 3. If you bring something new into the world that gives new voice or new capabilities to previously disenfranchised groups, then you have a choice between explicitly selecting who you bestow this power on (in which case you can preserve your righteousness, but sacrifice impartiality), or giving this gift away free to everyone who can make use of it (in which case you remain impartial, but will inevitably end up empowering people you find morally abhorrent). Or you can choose to do nothing and never bring anything useful into the world, which is also valid, but means you're eclipsed by people who do.

Silicon Valley (and science/tech in general) has traditionally selected power & impartiality, while nation-states and religions have traditionally selected power & righteousness. Many of the commenters here would seemingly select righteousness & impartiality, which perhaps speaks to why we're discussing this on a message board rather than bringing startups into the world.

"Choose two out of three" triangles are sexy, but I fail to see how this applies here. Doing nothing doesn't really mean choosing to be impartial and righteous at the same time, and there is no way to both moderate and be impartial if you actually run a platform.

I see it as every platform being able to choose a point on a sliding scale between enforcing its morality and being impartial. The more moderation you do, the less impartial you are, sure - but nobody actually wants the extremes of this scale.

I don't think that argument applies here. The article isn't advocating censorship or legal restrictions on social media. Instead, it's saying that reducing friction from a very specific content-sharing workflow and giving it a prominent position in the UI of social media sites may not have been a good design decision.

Edit: I intended this as a reply to a different comment, but it has a reply now so I'll leave it as-is.

It does apply a little if you extend the definition of "moderation" to include choosing the right set of features in order to direct your users' behavior.

That said, I was mostly replying to the parent comment, not discussing the main thesis of the article.

You can choose one or none if you want to. You just can't choose all three, because then you're doing what the article's subject described as 'handing a loaded weapon to 4 year olds'.

It's absurd to describe groups whose concerns have been the primary focus of American politics for 50 years, who benefit from numerous policies granting preferential treatment, and who have whole departments in universities devoted to their interests as "disenfranchised".

And that's where most attempts to be "righteous" fail: either through shallow thinking or the desire to be popular, they favor those groups that are already politically powerful and ignore the truly disenfranchised.

Interesting that only one of those three qualities is subjective. I also think you are incorrect about this being a pick-two situation. Righteousness, in addition to being subjective, is mutually exclusive with impartiality.

The way you remain both righteous and impartial is to never do anything consequential. That way, you can have the same effect for all parties while never doing anything you would find morally objectionable. It's essentially the trivial solution (magnitude of impact = 0), but is often implicitly advocated for in many debates here.

It occurred to me after writing this that there's a fourth variable: integrity, defined as your willingness to have principles and critically evaluate them against others' moral conduct. You could conceivably satisfy all of power, righteousness, and impartiality by assuming, in your value system, that everyone's values are morally acceptable and mutually compatible and hence there is no conflict between your values and everyone else's values. This is a position actually advocated by some philosophers, venture capitalists, and scientists! (There's a Tom Lehrer song in there somewhere: "Once zee rockets go up who cares where they come down / that's not my department / says Wernher von Braun!") It's not a mainstream one within the general population, though; most people do actually have some feelings and pass some moral judgment on other people.

And yes, I was very deliberately trying to highlight the tension between objectivity and subjectivity here. (There's a lot of complexity over whether power is objective or subjective, too - power is your ability to enforce your will [subjective] over reality [objective], so it has elements of both.)

What you will may be subjective, but your ability to bend others to your choices is not. Thus I'd argue power is objective.

Righteousness occurs as a value assessment within a moral framework. When all values, including opposing values, are of equal worth, we have a framework of amorality, which cannot ever produce a judgement of righteousness or wickedness. In other words, the term itself reduces the frameworks which fit. All of the remaining frameworks require active choice (as far as I understand).

Often it just comes down to pure metrics for them. If it increases engagement metrics, keep it. If it decreases those numbers, drop or revise. Local maxima optimization.

One major problem with this: It's not like he "invented" the retweet. The userbase invented the "RT" prefixed message. Twitter the company simply took what the user base had done and weaponized it and empowered it. But at the same time they made it easier to track a single specific message being RT'd. So blessing and a curse.

The article specifically argues that the retweet button causes problems that manual retweeting didn't, and so he did invent the thing that's actually a problem.

Yea the act of copying and pasting to do a retweet required just enough extra work that people had to think about what they were typing, and so users used it in a less reactionary way.

It's kind of incredible that a tweak to one small feature could have such a large behavioral impact.

I don't actually believe that, though. I think they're confusing the effects of adding a button to do what users were doing already with the effects of rapid growth, and of their own cowardice in refusing to slow that growth by kicking bad actors off the platform.

In the context of "reducing friction" to improve conversion from click-through advertising it's not unexpected that making it easier to retweet results in more people retweeting stuff without thinking it through.

I distinctly remember people typing "RT @whoever some fake tweet they never said" as a joke or to be purposefully misleading. It solved a real problem with the old fake RT system.

There's a really solvable answer to that, though: ban people who do it. Enforce some controls on your platform.

But the search for more money continues...

I think I understand that argument, but I'm not convinced that it's really all that true or that the actual button being added to Twitter was all that consequential.

Is a manual task like copy/paste and writing quotemarks really enough "friction" to reduce what seems like a pretty natural human drive on social media? Would doing it over and over again not make it just as "automatic" as a single button push for power users, even if it's not as convenient?

We also don't have the benefit of looking at the counterfactual situation where the retweet button wasn't implemented.

Maybe keeping manual retweeting would just result in a worse kind of "dunking" that had the extra dynamic of the retweeters subtly tweaking the wording of the quoted tweet.

Besides, if Twitter hadn't implemented the retweet button, the hordes of 3rd-party apps or some Twitter alternative probably would have anyway.

The article says Twitter saw a lot more retweeting after implementing the button. (“After the retweet button debuted, Wetherell was struck by how effectively it spread information. “It did a lot of what it was designed to do,” he said. “It had a force multiplier that other things didn’t have.”)

It seems like they saw an immediate change in behavior and results, enough to conclude the button directly affected behavior. My anecdotal experience was similar. Though I agree we will never know how effective the third-party tools would have been, or how much the delay of third-vs-first-party development would have affected Twitter's growth.

> The article specifically argues that the retweet button causes problems that manual retweeting didn't, and so he did invent the thing that's actually a problem.

Did he? Twitter clients already had one-click retweet functionality by the time Twitter subsumed/operationalized it, IIRC.

Didn't the button appear in most third party clients long before?

"weaponized it" This is foolish language to use. A retweet is actually super helpful and provides value. However, as with all tools, it matters what you use it for.

These days it seems like most people forget that there is a Twitter that exists outside politics. There are lots of communities on Twitter that provide tremendous value to those who are part of them.

> what you use it for

In some cases it makes it easier to cause harm, so weaponize seems appropriate (turning a non-weapon into a weapon)...

I found this to be a rather refreshing take on the traditional "tech people didn't consider the consequences" - it doesn't place blame on the tech person, and reflects thought and awareness before and after. (It's easy in hindsight to say "people suck, so this is a terrible idea", but far harder to do in advance).

Left unanswered is what to do about it - the implied "require some effort" isn't likely to be successful, as the incentive to make things easier is there, and the incentive to promote long-term civility is...not.

For a long time, I've made the argument that the First and Second Amendments of the US Constitution are indistinguishable from each other.

Previously, when the government was attacking encryption, I made the argument that this was gun control for software. "What you have is too powerful. No one needs 'military grade' encryption. Some people are misusing it, and we need the government to control it. No one's coming for your encryption, we just want key escrow." Of course, encryption is not only a protection for banking, it's a protection for free speech.

I agree that words can be a "weapon"--but in the same way that a scalpel can be a weapon. It depends how you use it. It's frightening that people are now starting to apply the "gun control" thought process to "retweets".

Before anyone jumps down my throat with "but Twitter isn't bound by the constitution", no duh. But regardless of whether it's a company or a government, hearing someone with a lot of power advocate for suppressing freedom of expression isn't a great thing.

Whatever happened to the "Free Speech Wing" of the "Free Speech Party"?

So tired of these "OMG What Did I Create?" stories from social media devs. How about you grow a conscience _before_ you build these things? I have told managers to go f themselves for:

* Content that would demean people

* Lying to or misleading customers

* Hosing over my fellow engineers

... among other things

I don't think it's that special of a thing to do but reading these things makes it seem like it is.

The problem is idealism.

> “I was very excited about the opportunity that Twitter represented,” Wetherell said, noting that he initially felt the retweet button would elevate voices from underrepresented communities.

The problem is that it didn't register to him, or anyone, that "racists" were an "underrepresented community", as were sexists, misogynists, homophobes, anti-intellectuals, quacks, and so on. When attempting to give a voice to minorities, they gave a voice to all "minorities", including people who had minority opinions. The idealism came in the form of expecting that these tools will, by default, be used for good... it wouldn't surprise me that people in the tech community, who have often come from privilege or at the very least live within privileged bubbles, are naive about what people are really capable of.

"A rising tide lifts all boats..." unfortunately some of those ships are full of pirates and disease.

It's sad, but you really need to think about these things from a jaded and cynical perspective when building them; it's unfortunate that there's been a long crusade against cynicism. Being optimistic about the world and humanity leads to problems like this.

Well said. In addition to simply being naive, I think there is also the self-delusion that everyone subscribes to when they convince themselves that their job (no matter what it is) is somehow contributing something good to the world. I had a relative who worked as a telemarketer selling predatory debt consolidation packages to people on the verge of bankruptcy, and when he described what he was doing, he enthusiastically framed it as saving people from bankruptcy, even though he was just helping them dig a bigger hole.

If your company is a public corporation, your only purpose is to maximize shareholder value.

I would go less with blaming optimism and more the mirrored bubble of the internet itself. Everyone out there is just like me, right?

As someone who's neither a heavy social media user[1] nor an engineer on one of these projects, I think this perspective doesn't really make a lot of sense in context. The consensus that social media is a horribly damaging drug that's destroying our democracy is quite recent.

Someone, a decade ago, decided to slightly lower the friction of the already-existing practice of retweeting by formalizing it as a feature. Expecting them to have the clairvoyance necessary to interpret that as an unethical act is beyond insane. The same is true of pretty much everything else on the product side of social media companies: you'd have to have a much keener, more cynical sense than average of how deeply horrible most people are in order to predict that social media would've evolved the way it has. "Not internalizing the fact that most people are mostly shitty people" isn't a reasonable bar for tarring people as knowingly unethical. The new public square and its ability to unleash unfiltered humanity is _sui generis_, so nobody has a schema for how to handle this. The closest you come to people who've grappled with this question are people like the Founding Fathers.

(and to be clear, I'm no fan of these companies: I explicitly avoided applying to Facebook and Twitter out of college because I either didn't agree with their ethics or thought they were shit at product. My opinion hasn't changed, but I don't think it's valuable to look back and place unreasonable standards of clairvoyance onto employees).

[1] I suppose depending on how broad your definition is, I use HN fairly regularly, but the frequency and level of interaction on a site like this seems categorically different to the kinds of social media usage you're talking about when you say "grow a conscience before building them".

So, had you been in his position, about to join Twitter or working for Twitter and about to build the RT button, you believe you would have foreseen its effects and refused to participate based on your conscience?

Did anyone actually predict in Twitter’s early days that adding an RT button would lead to it being an “anger-based platform”? What makes you think you would have?

I wonder if social media is a level one or a level two chaotic system, categories distinguished by whether the system responds to predictions made about it. A level two system reacts to predictions; a level one system, like the weather, doesn't.

I don't think that it requires a lack of conscience to end up creating a dangerous behemoth in a level two chaotic system. We couldn't necessarily have known that social media would lead to the ecosystem that we have today.

When I look at social media and the advertising ecosystem around it, the major flaw I see is that social media companies must get eyes on the page and must supply value, and you simply cannot accomplish that if information is supplied to users as a 'dumb pipe'. Users must be convinced that they are part of specific communities while online in order to keep them on site, so the dumb pipe is replaced with a 'feed' of some sort, which is nothing less than human experimentation. I think Twitter does this by building large emotional groups via these quick sharing mechanisms, which isn't necessarily bad, but the fact that everyone is informed of them as they are going on is, again, human experimentation.

I think that the social media fix is to reduce all of them to dumb pipes, because anything more than that is a huge risk. They should probably have things that fight spam, but that's a security issue, not a human experimentation one.

When Twitter was first becoming big, the most important story was how Twitter was empowering the Arab Spring and giving a voice to the voiceless. The retweet button was a part of that; it allowed people to quickly and easily share vital information.

The world was kind of taken off-guard by the fact that this relatively tiny, fail-whale-laden platform could enable freedom and resistance against totalitarian governments. We, and by "we" I mean the collective internet, didn't really foresee that same platform becoming a platform for hatred and disinformation, or a tool of those same totalitarian governments.

Maybe we should have, but there's a problem with that: hindsight is easy, but proactively thinking like a shithead is difficult. Creating the retweet button isn't a matter of conscience, it's a matter of not foreseeing the shitty things shitty people would do with that tool.

I would just like to understand the mental process that evolves a conscience from "This is probably fine" to "That’s what I think we actually did [just handed a 4-year-old a loaded weapon]". In my moral calculus giving a 4-year-old a loaded weapon is pretty damn distinct from implementing a social media button for teens and older which among its negative aspects facilitates cyber bullying or other negative social behavior. With people equating the two it's probably not even that facetious to write a cliche that goes like "first they came for my retweet button, then they came for my 2nd amendment". (Edit: I did think of one mental process that's not just the easy "say whatever in front of the media" reason, but I don't think the guy is making a more interesting consequentialist dust-specks vs. torture type of claim.)

Have to normalize that into utilons first. :)

Giving one 4 y/o a gun is pretty bad. Giving one teen a retweet button is much less bad. Giving _all_ teens and teen-minded people retweet buttons is as bad as a sum of all negative (and positive) consequences.

Once you do that you might reasonably arrive at equivalence.

How could you have known in 2009 how the dynamics of Twitter would evolve over the next decade? There's no way. I've used Twitter since 2008, and if you had told me then that seven years later the US president would be using it as his primary communication platform with the public, I wouldn't have believed you.

I feel like this article covered that fairly well. The developers had no idea that this simple feature addition (which did nothing more than formalize what users were already doing) would result in what we have today.

Maybe you see further than others, but the rest of us are subject to the law of unintended consequences.

Ironically, it's things like Twitter that have popularised these types of public confessions of guilt in the court of public opinion. Maybe it's sensible to preempt the mob attacking you by apologizing before they get you.

I think the real problem is Twitter refuses to make sophisticated filtering available to users. Simple mute, block, and keyword filtering are inadequate.

I was wondering today why Facebook et al. don't break their share functions into emotionally binned categories.

"Share because it makes me angry" etc

Seems like they'd get amazing whole text training sets, as well as a signal to better guide their platforms (e.g. under-weighting negative emotions in share velocity).

I wonder if it's already possible to do this without a new button.

If a lot of people close to you in your social network react "angry" to something and you choose to share it, there's a good chance you're sharing it because it makes you angry too.
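That inference can be sketched in a few lines. The data shapes here (a reactions map and a friends map) are hypothetical and not any real platform API, just an illustration of the idea:

```python
# Sketch: infer the likely emotion behind a share from how the
# sharer's neighbors reacted to the same post.
from collections import Counter

def infer_share_emotion(sharer, post_reactions, friends):
    """Return the most common reaction among the sharer's friends
    who reacted to the post, or None if no friend reacted."""
    neighbor_reactions = [
        reaction for user, reaction in post_reactions.items()
        if user in friends.get(sharer, set())
    ]
    if not neighbor_reactions:
        return None
    return Counter(neighbor_reactions).most_common(1)[0][0]
```

On ties this just picks one of the most common reactions; a real system would presumably weight by tie strength and recency rather than a flat majority.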

Graphs are cool

They're designing for engagement and/or revenues, not for outcome; we (hopefully) don't raise our children to be as hyper as possible as a leading metric.

People would quickly realize that if they share it under a negative category their share velocity would be hit. A better approach might just be to sentiment-detect negative emotion in Tweets and adjust share velocity accordingly.
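A minimal sketch of that sentiment-dampening idea, using a toy word lexicon in place of a real sentiment model (the word lists and the dampening formula are invented for illustration):

```python
# Toy sketch: dampen a post's share weight when its text scores
# negative on a simple lexicon, so angry content spreads slower.
NEGATIVE = {"hate", "idiot", "disgusting", "outrage", "angry"}
POSITIVE = {"love", "great", "helpful", "thanks", "beautiful"}

def sentiment_score(text: str) -> int:
    """Crude polarity: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def share_weight(text: str, base: float = 1.0) -> float:
    """Down-weight shares of negative posts; leave others at base velocity."""
    score = sentiment_score(text)
    if score < 0:
        return base / (1 + abs(score))  # more negativity, slower spread
    return base
```

This is exactly the gameable version described above, of course: nothing stops users from rephrasing to dodge the lexicon, which is why anything real would need a trained classifier rather than keyword matching.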

that's cute that you're assuming they would want to do that.

Good filtering has traditionally been one of the killer features of userscripts for enhancing imageboard experiences. Things like filtering by image hash, regular expressions and recursive filtering (also filter content that engages with filtered content; repeat) are pretty powerful tools for fighting abuse or, at minimum, making the daily grind not quite so terrible. Stuff like that is generally accomplishable client-side, so as a non-Twitter user I'm a bit curious why this isn't already a widely used thing.

Makes me want to mess around with a browser extension that uses perceptual hashing to filter smug frog images or whatever.
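A rough sketch of the perceptual-hashing part, using a simple average hash (aHash) over an already-downscaled 8x8 grayscale grid; a real extension would first decode and downscale each image (e.g. with Pillow), which is omitted here:

```python
# Minimal average-hash (aHash) sketch: input is an 8x8 grid of
# 0-255 grayscale values, output is a 64-bit integer hash.

def average_hash(pixels):
    """64-bit perceptual hash: bit is 1 where a pixel exceeds the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; small distance = visually similar."""
    return bin(a ^ b).count("1")

def should_filter(img, blocked_hashes, threshold=5):
    """Hide an image whose hash is near any hash on the blocklist."""
    h = average_hash(img)
    return any(hamming(h, b) <= threshold for b in blocked_hashes)
```

The point of hashing perceptually rather than exactly (as with an MD5 blocklist) is that recompressed or slightly edited copies of a blocked image still land within a few bits of the original hash.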

I've found keyword filtering to be very effective, but they limit your filter list to 200 keywords and they don't apply to ads (so I can't block ads about blockchain stuff).

They don't even have to be that sophisticated: just let me automatically block accounts that have no phone number and/or were created recently.

You can do that for 'no phone number' accounts; people can still reply to you, but you'll see no notification.

Great, didn't know that. Thanks

There seems to be a growing idea in Silicon Valley minds that we (IT crowd, devs, techies, managers) are in fact great! Way smarter than those stupid peons around us who use our products. So logically we have a mission to bring those unwashed morons to the bright side, and we must manipulate them for their own good.

While it's certainly a pleasant illusion for us, there are two glaring problems:

1. We are not necessarily better (neither morally nor intellectually) than other people; it's just the current importance of the IT economy which makes us a bit more successful, and sometimes gives us some sort of influence.

2. There's empirical evidence from activist journalism which suggests the only achievable result down this path is loss of respect and growing animosity.

Retweets function to spread messages, but the quote tweet (combined with the retweet) is the ultimate hate/shame tool. I rarely see it used outside of "hey universe, look what this idiot once said".

Been saying for years that the Share button is what ruined Facebook. I went there to see what was up with people I know, not to get news or memes.

The problem now is that FB makes a lot of money via sharing paid content so they won't get rid of it.

The mainstream media has weaponized the use of the word weaponized

I think it's funny that this developer is calling his users 4-year-olds. Reminds me of when we used to call them lusers.

So -- remove the button.

No? Why not?

Maybe it's not actually a loaded gun in the hands of a 4 year old.

Because people will go back to the RT: convention and Twitter won't be able to sell the analytics as easily.

Maybe they will finally get rid of likes acting like retweets. Ah, the good ole days.

Fta: "A full rollback of the share and retweet buttons is unrealistic, and Wetherell doesn’t believe it’s a good idea. Were these buttons universally disabled, he said, people could pay users with large audiences to get their message out, giving them disproportionate power."

Now can someone unpack that for me?

The guy in the article doesn't even work at Twitter. He's not in a position to remove it.

And he probably was not even when he was working at Twitter.

Maybe other people who work at Twitter could remove it? The question was why not just remove it.

The people who are in a position to remove it aren't the ones who said it's like handing a loaded weapon to a 4-year-old, so they don't have the reason to remove it that the question was referring to.

They have a more nuanced view and are quoted in the article saying that they're looking into how to modify the interface to "encourage more consideration before spread".

Answer: Stockholders

Because users like retweeting content.

There might be a way to fix the problem that also doesn't cost Twitter money.

Twitter was not made for serious, challenging conversation from the beginning. Every feature it has is designed for short, sarcastic tweets or replies. It's almost hardcoded to the product. Possibly that's why politicians love it.

There is a much larger problem here than RTs, and that is lack of journalistic integrity.

You can spread pretty much any sort of information without research or even a logical basis. Weirdly, we tend to default to accepting it as valid rather than questioning it.

The only realistic cure for this, outside of draconian authority, is to teach people how to be more skeptical and educate them on logical fallacies.

These are things that should be taught from elementary school and onwards IMHO.

So I don't remember this well enough to find a link, but some time ago there was a viral clip of someone tripping on the subway steps in NY. They drop all their stuff, look like a dork, everyone's fine, whatever. But then someone takes a longer look at security camera footage from the station (or someone independent sets up a camera, I forget), and finds that lots of people trip on that particular step. Turns out it's slightly taller than the others, so it's easy to catch your foot on it.

One possible solution is to convince everyone individually to pay really close attention to their feet all the time. Or you could fix the steps.

No one had to go around and convince people to be shitty to each other on the internet. A structural change made it easier and more rewarding, and our dumb monkey brains went wild. If your solution is to change the dumb monkey brain part and not the structural part, you're gonna have a bad time.

Knowledge of fallacies doesn't help with the tendency of outlets to bury stories they don't like, so it's not a silver bullet.

But it would help. I think you need to build the tools in such a way that education becomes intrinsic to using them. If there were the ability to mark posts or tweets such that you can see "does not follow" or "citation needed" on the specific phrases that would help. Assuming you can do this in such a way that it isn't used to spam.

Did they know it was a weapon? Would it be a weapon if twitter wasn't popular?

The same thing could be said of every tech product back to the sharpened rock.

Retweet is like gossip but easier. There are people who are wired to enjoy gossiping. They can have this stupidity.

On the other hand, I prefer to avoid Twitter altogether and get my news from actual journalism.


> Gamergate as a grassroots, distributed movement was very interesting and it's sadly not surprising that "the powers that be" want to reign back in control of the narrative.

Ah yes, distributed grassroots misogyny. If only the """powers that be""" hadn't reined back control of this vital, important movement to harass women online.

Gamergate was a harassment movement aimed at women in videogames that had some very smart people thinking of justifications and excuses to protect it in public.

That's conflating the actions of trolls with people that were critical of activists. Those are not at all the same thing. The former is NOT ok, the latter is ok and should result in healthy debate.

But because Twitter doesn't like nuance, those get collapsed into a single, simple and most outrage-inducing format. A black & white hero/villain narrative. Which also avoids having to deal with any critiques, regardless of validity.

That ambiguity was a deliberate strategy of Gamergate (and still is a deliberate strategy even today of those with similar disposition).

Is it truly possible to know if this was a deliberate strategy of Gamergate? I’ve heard this statement before and when I try to politely challenge it...the results aren’t as kind in return.

Hoping that isn’t the case here, because I am perhaps inappropriately curious if an answer exists.

Yes, it's beyond possible. It's a certainty. Contemporaneous logs[0] revealed the deliberate efforts of misogynistic trolls to pretend their motivations were more palatable.

[0] https://arstechnica.com/gaming/2014/09/new-chat-logs-show-ho...

Trolls always try to wrap their attacks in semi-agreeable contexts so they don't get immediately dismissed and can enter mainstream channels. That's the whole idea. Similar to how activists try to paint everyone who disagrees with them as mere sexists/racists/etc to give their arguments weight and credibility in those same channels - usually using the trolls' purposefully provocative behaviour as their evidence and justification.

It's an increasingly popular phenomenon that we need to be conscious of as a culture and not give credence to the trolls while also not using examples from the fringe to pigeonhole the whole.

Interesting. Proceeding to read your link this moment

I admit, despite being someone who plays a lot of video games, I didn't wade into these waters, because from the sidelines it looked like a lot of people painting things with very wide brushes, and a lot of static noise that made trying to sift through any of it more annoying than it was worth.

Do you realize you are employing the typical defence of any conspiracy theory when facing the fact that the alleged conspiracy doesn't look consistent enough to be real? And I mean that absolutely literally: I have heard that exact statement from people who believe in ZOG.

You've in fact heard lots of reasonable sentences from conspiracy theorists, if you replace the nouns. I think it's less reasonable to believe in ZOG than to hold the opinion that Gamergate appeared to be more about childish mockery and hatred than the weightier concept of journalistic integrity. By the logic you're implying, we can't accept that anything is any more or less than exactly what its most well-spoken proponents try to present it as.

A conspiracy theorist can say reasonable things; however, this exact thing is a circular argument: the conspiracy exists, but our proof doesn't look convincing precisely because it is a conspiracy.

There are no generalizations in my comment of the kind you implied.

We know gamergate was a conspiracy because someone found the IRC channel where the conspirators were conspiring.

The proof of which existed only in the form of a picture posted by 'someone' who (mildly speaking) was deeply involved in the controversy.

It's not just gamergate that uses that ambiguity -- think of how the "okay" symbol ("o" formed with thumb and first finger, remaining three extended upward) has been repurposed as a white power gesture; all of the group photos of proud boys and other white supremacist groups now have them all holding up that distinctive "okay" hand gesture, because it's a call out to their values and to the people that share them.

But when the 4channers were first starting to use it this way, it could be dismissed as "what, it's just an okay symbol"...

The ambiguity ("it's just a joke, it's just this, it's just that") is part of their discourse.

Yes, that's what I was alluding to.

The Proud Boys are a strange white power group, given that they have prominent black members.

I'm glad you didn't dispute that they're a white power group; for interested readers who waded this far into the comments, here are some useful links:

* https://www.adl.org/resources/backgrounders/proud-boys * https://www.splcenter.org/fighting-hate/extremist-files/grou...

Obviously I was disputing it.

You do know that Gavin McInnes's wife is Indian, right, and that he started the Proud Boys as a joke? You do know that, right?

I wouldn't believe everything you read from those sources. The SPLC just had to apologise and pay 3.3 million to Maajid Nawaz because they called him Islamophobic, and he is a Muslim. Maajid Nawaz was suing them for defamation in the UK.


If you'd like to post anything refuting the fact that the Proud Boys are a white supremacist group, please do so.

None of these statements refutes the claim: 1) they have non-white members so they can't be, 2) the founder has a non-white wife, 3) the SPLC was successfully sued for unrelated reasons by someone unrelated to the Proud Boys, or 4) "it started as a joke".

No, sorry, this is backwards. I don't have to prove anything. I wasn't the one making the claim that they are (and it is quite a serious claim).

As far as I am concerned there is no evidence other than spurious claims from the SPLC and the ADL.

I gave you evidence that the SPLC throws accusations of racism (Islamophobia is a very similar accusation) around too liberally, and because Maajid Nawaz is in the UK, which has tighter defamation laws, he was able to sue them for a substantial amount of money. If the SPLC doesn't do basic research about Maajid Nawaz, how can you trust that they have done their research about the Proud Boys? The answer is you can't.

And sorry, those statements do somewhat refute the claim. White supremacists wouldn't have non-white people in their movement. White supremacists wouldn't have children with non-whites (they call such people race-traitors). The movement was started as a joke (Gavin McInnes was on Joe Rogan podcast episodes 920 and 710; I believe he specifically coined the term on one of those appearances, though it could have been an earlier one).

So I can back up everything I've said. All you've presented so far is smear pieces by organisations that even Noam Chomsky has criticised in the past.

There is the possibility of conflating members who are hateful trolls with members who are honestly critical, and there is the possibility of any given movement being legitimately defined by hatefulness, by sheer membership and public expression alone. It's hard to draw a line between those two things, but personally, based on tons of experience in the places where Gamergaters were most prevalent, I think Gamergate lay on the hateful side of that imaginary line, and in fact was not even particularly close to the line.

No it wasn't. I actually looked into the whole saga as I kept on hearing about it (back in 2015ish).

From what I could glean, there were some individual women who said disparaging things about a whole group of people and were surprised when they got pushback from the rest of the community, which, by the way, included women.

I've noticed, more generally, that one person will say something quite rude about a whole group of people, people quite rightly tell them to shove it and leave them alone, and then that individual claims harassment.

Then you didn't look into it that deeply, because it's generally agreed that what started the saga was the jilted ex-boyfriend of a woman who created a video game accusing a publication of giving her game a favorable review because the reviewer slept with her.

The entire accusation is bunk because the person accused didn't even review the game.

I looked plenty deep enough. That series of events is a very very dishonest and incomplete attempt at re-framing the whole issue and all the people involved.

I have very little to do generally with games, but I've had a similar situation (but at a much smaller scale) where people have claimed bullying (no such thing happened) after basically upsetting a large number of the group.

I am also old enough to have seen enough scandals play out similarly in other spheres of the media. I have actually been at an event where the media misrepresented what happened. So I wouldn't trust a journalist as far as I can throw them.

That might be how the activists like to spin it, but it became about much more than that when it blew up and went viral. It was really about an entire established, lively, chaotic culture, developed over multiple decades, that people adored and got defensive over. So heavy-handed, top-down attempts to manually change it didn't go over well.

Then the villainization of anyone who disagreed with the activists' approach as women-haters who think it's okay to harass women only increased the backlash.

Is it recency bias on my part to say it feels like this tactic is showing up in more places than it deserves to? The last American election cycle was terrifying to watch for precisely what you just described happening all across Twitter and elsewhere on social media.

Assent with {group_A} or be labelled complicit with {group_B} by default with little headroom for nuance or context.

> Is it recency bias on my part to say it feels like this tactic is showing up in more places than it deserves to?

No, but it is recency bias to think that it showing up in more places than it deserves to is novel. It has been an exceedingly common tactic everywhere for probably as long as civilization, or maybe language, has existed; it manifests in moral/religious/philosophical dualism, for instance, which is extremely widespread and ancient.

It’s a tactic that was perfected during the course of the last election. It existed well before that, of course, but now it’s been refined for the internet and social media age.

I’m not looking forward to the next election.

Gamergate was an INSANELY complex situation, involving thousands of actors with differing and intentionally unclear motivations. It was eventually labeled as a "movement" by the opposition, not by the "members" of the "movement", in order to paint a picture of their persecution, whether valid or invalid. They wanted a platform. They were given one. And they exploited it.

I mean, Jesus, just read the wikipedia article.

> Frank Lantz of NYU's Game Center wrote that he could not find "a single explanation of a coherent Gamergate position". Christopher Grant, editor-in-chief of Polygon, told the Columbia Journalism Review: "The closest thing we've been able to divine is that it's noise. It's chaos [...] all you can do is find patterns. And ultimately Gamergate will be defined—I think has been defined—by some of its basest elements."

That's because there Was No Movement! The pattern recognition centers in human brains go beyond confusion when presented with a situation like this. You know how whenever Anonymous comes up in the news, it's always labeled "the online hacker group Anonymous"? It's utter insanity. It's not a group. It's not a movement. They don't have weekly meetings down at the VFW. It's just an asshole with a computer. And, in the case of Gamergate, there was a cultural zeitgeist involved. A thousand people send disgusting messages on Twitter; our brains have to believe there's a pattern, that there's organization. But what if there isn't? Because, well, there wasn't with Gamergate. It was just mob mentality.

Frankly, we're all better off just putting both sides of that whole shitstorm behind us, like most internet food fights. The assholes who harassed people were, well, assholes. The women they harassed didn't deserve what happened to them, but they also handled it very poorly. Streisand Effect hits. Now Anonymous has a spotlight, and you know they're going to abuse that. It's no longer about what they actually believe. Now, it's about Chaos. Because Chaos is Fun. You call them sexist. You call them bigots. They laugh. They don't care what you call them. But: You care what they're calling you. Now, they're in control. They have the power. You gave it to them.

Afterward, everyone rational is trying to figure out what happened, trying to derive a pattern and point fingers. It has to be someone's fault; we need content for this clickbait we're writing about it. The people involved are still laughing, because they're STILL controlling you. Just stop. Don't engage terrorists.

> Don't engage terrorists.

Right. Deplatform them, ban them, and prosecute them instead.

> People want to retweet

People want to gamble, too, and to drink alcohol, and to shoot heroin. That doesn't mean those things are good for them, or that it's axiomatically ideal either for them individually or for society at large to not have any restrictions on their ability to satisfy those desires.

Historically, though, the restrictions placed in pursuit of your proposed ideal have caused enormous suffering and destroyed entire communities through mass incarceration, corruption, and gang/cartel violence fueled by black market money. Attempting to stamp out negative desires by decree through large-scale force has plausibly caused far more damage to society than simply accepting that a small fraction of people will burn up their lives, smash in some car windows, and eventually kill themselves in pursuit of their vice. And the former problem hasn't even replaced the latter, now we just have both.

Are you equating retweets to using heroin?

The difference is that lowering friction to perform an action has caused an exponential increase in its usage... and it’s so associated with cognitive bias and impulsive thinking that it’s allowed a lot of false information to spread quickly.

I say let people go back to the manual RT

Was there really that big of a difference between “via @user”, “RT”, and the retweet button?

User interaction drops dramatically with each extra click required -- I don't think it's hard to extrapolate that an action involving at least two additional clicks (select text, then focus the input), plus keying in text, pasting text, and reformatting the pasted text to fit fewer available characters would greatly drop the rate of retweets.

I think a similar mechanism explains why some diseases blow up into epidemics and some fizzle out. Epidemiologists have all the hard math on just how much friction is needed.
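The epidemic analogy above can be sketched as a toy branching-process model (all numbers hypothetical; `reach` and `p_share` are illustrative parameters, not real Twitter data). Each viewer reshares with some probability, exposing new viewers; the expected "reproduction number" R = reach × p_share determines whether a post fizzles (R < 1) or blows up (R > 1), which is the same threshold logic epidemiologists use:

```python
def expected_views(reach, p_share, generations):
    """Total expected views accumulated over resharing generations.

    reach: how many new viewers each reshare exposes
    p_share: probability a given viewer reshares (the 'friction' knob)
    """
    r = reach * p_share          # expected new viewers spawned per viewer
    total, current = 0.0, 1.0    # start with a single original post
    for _ in range(generations):
        current *= r             # each generation multiplies by R
        total += current
    return total

# Hypothetical numbers: each reshare reaches 100 followers.
# One-click retweet (low friction): 2% reshare rate -> R = 2, growth.
low_friction = expected_views(reach=100, p_share=0.02, generations=5)
# Manual copy-paste RT (high friction): 0.5% reshare rate -> R = 0.5, decay.
high_friction = expected_views(reach=100, p_share=0.005, generations=5)
```

A 4x drop in the per-viewer reshare probability doesn't shrink spread by 4x; it flips the cascade from exponential growth to exponential decay, which is one plausible reading of why the button mattered so much more than the character savings suggest.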

Psychologically speaking, I always felt the original RT kind of forced the RTer to put the words into their own mouth.

Taking something that required thought and typing and turning it into a one-click, no-thought-required operation? Absolutely.

Jeff Bezos certainly thinks so -- why else would he have bothered patenting one-click buying?

I always roll my eyes when HN falls for interviews with "random engineer or product person responsible for [feature]", as if that automatically confers authority like they're Richard Stallman or Tim Berners-Lee.

Combining the fact that they have nothing insightful to say with the ego boost from being interviewed in visible articles means that people like this are going to say whatever they think will provide a punchy, clickbait headline for the less-discerning members of link boards to feed their biases with.

The only notable thing here is how hard HN falls for drivel like this.

For me, the motivation is that it's interesting to see someone grapple with the consequences of their work.

This guy is clearly no guru. And the sensationalism of the title is just media being media.

Yea, I think that's definitely an interesting thing to see in action. But what I'm pushing back against is the notion that this person's perspective is all that valuable: he doesn't even appear to have been that heavily involved in the feature. The point is that _Buzzfeed knows that_, and they know that this is just a lazy piece for a slow news day, that will be clicked on and shared by the more thoughtless parts of the Internet.

I definitely wouldn't be making this comment were the article about @jack, or someone else that Buzzfeed could make the case was responsible for the design and development of Twitter early and over time.

Why do you say "HN" is falling for it? Hacker News did not write the article. Buzzfeed did. We're simply commenting on it.

I'm not sure I understand this comment at all.

Buzzfeed is the one writing the clickbait. Obviously they would not be the ones falling for anything. HN, collectively, put this article on the front page, treating shallow nonsense as if it's notable because it feeds our biases. It's a data point that speaks ill of the collective discernment abilities of HN (and the quality of the community) that we put content-free clickbait on the front page and discuss it as if it's valuable (or more accurately, use it as a fig leaf for soapboxing preexisting views).

This isn't some generalized "HN is full of dumb low-effort takes and clickbait" complaint; I got used to that years and years ago. But part of being on an online forum is pushing back against stuff like this and calling out low quality content when we see it.

The issue isn't the behaviour, it's the moderation - or, for lack of a better word, the parenting - that happens or doesn't happen afterward. It will come down to platforms that govern well and fairly vs. those that don't.

I'm sure some percentage of Gamergate members were honest, but it would be naive to think the movement wasn't characterized by a dramatic majority of simple hatred.

I find it hard to believe that people think it's as simple as gamers just plain hating women for no reason, but it seems to be repeated all the time.

A lot of the people you're referring to don't think it's "as simple" as "just plain hating" for "no reason". Reality is more complex, but it's a pretty reasonable opinion, based on observation and direct experience, that a lot of Gamergate revolved around calling any women involved whores, calling any men involved cucks, and calling any Jews involved kikes. I'm not sure how it's possible to expect people to not get from there to the feeling that one of the primary front-facing attributes of Gamergate was hatred.

I have no feelings about Gamergate in particular, but I've seen this applied to enough cases of lies enshrined in the mainstream consensus (when even a quick Wikipedia search provides a more balanced view): people are far stupider and more dishonest than you're probably calibrated for.

People suck, and when you let the suckiest people have the same level of audience as less sucky people, you've got a problem.

Buzzfeed’s entire business was built on clickbait.


> Tu quoque (Latin for "you also"), or the appeal to hypocrisy, is a fallacy that intends to discredit the opponent's argument by asserting the opponent's failure to act consistently in accordance with its conclusion(s).

How exactly is pointing out someone's hypocrisy not a legitimate way to discredit them?

"You don't live up to your own argument!" is an attack on the person, not the argument -- a form of ad hominem. Just because someone doesn't take their own advice doesn't mean that the advice itself is unsound.

As the old saying goes, even a stopped clock is right twice a day.

How is it not a reflection on the argument? If I say something is good advice but don't follow it, I clearly don't believe it myself in every respect. That seems highly relevant.

Again, that says something about you, not about the argument.

I say people should get some exercise every day. In practice, however, I only get to the gym once or twice a week. That means I'm a hypocrite (and a lazy one at that), but it doesn't mean people shouldn't get some exercise every day.

But it says you don't believe your own advice doesn't it? If you believed it, you'd exercise despite your laziness. Or perhaps it would be more accurate to say you believe a counterfactual more. I'm pretty confident about the effects of exercise, but we're talking here about hypothetical difficult questions. If even the person who supposedly knows the most about a particular argument isn't convinced by it, how can that not reflect poorly on the argument?

But even if I accepted that pointing out hypocrisy is purely argument by ad hominem, I don't see how it would be irrelevant. If someone believes something radically different from me and therefore works towards radically different outcomes, I consciously try to stay open minded in case they know something I don't. If someone isn't consistent in what they profess and what they work towards, I don't feel I owe them that sort of hearing. At best, their ideas are good but they don't understand them well enough to represent them. Would you really call that unfair?

Also a guy who did major pioneering work on infinite scrolling https://www.bbc.com/news/technology-44640959

Hyperbole. Social media is not broken. The retweet is not why. And this guy didn't invent the idea of republishing content. Other microblog services had that, and I'm pretty sure Twitter users were doing it on their own.

It dropped the barrier to republishing to nearly zero, to where impulse becomes the driver.

So I'm curious what you think is the problem, or if there is one - and if so, what your proposed solution would be?

People create their own problems out of nothing. People make mountains out of mole hills. Twitter is one of those problems. Twitter is one of those mountains.

So you haven't thought very deeply about this? Those statements don't tell me anything - I can say there's the colour white, but that's actually made up of a spectrum of colours. I ask because, if you haven't thought deeply about it, how are you confident that your macro statements are accurate?
