If you are sincerely interested in quashing abuse, and if the risk of being laid off does not frighten you, forget about the numbers and let’s talk about quashing abuse.

Twitter gets extremely mixed reviews from people who are the targets of abuse, and I believe I am putting that conservatively. So, what I would ask is not whether they are going to lay me off because they run out of money, but whether I am going to quit because when I get inside, I discover that they are not going to actually do much about it.

If Twitter has had a come-to-jesus moment about abuse, and there are no structural obstacles to doing something about abuse, this could be a job where you will one day look back and say, “I was part of the team that turned the corner on Twitter’s biggest problem. I made a difference.”

On the other hand, if Twitter doesn’t have quashing abuse in its cultural DNA, or if there are deep structural obstacles to quashing abuse, then you may discover that you cannot actually make a difference. That can be soul-crushing if you are passionate about the work.

I am not making a claim one way or the other about where Twitter is with this, I’m just suggesting that if you are motivated by making a difference, the biggest thing to figure out is whether you will actually be able to make a difference.

IMO, this matters more than the financial risk.




Wow, this is up there as one of the best responses I've read on HN.

Abuse is a huge problem at Twitter, so being part of the team fixing it is a huge opportunity to "move the needle," as they say. And yet drama has historically been a views/comments driver, so some factions of the company might actually want to keep it around for its 'metrics'.

This is one of the biggest challenges at modern "web" companies: there is the outward-facing position on a "problem," and then there are some folks who are extracting value out of the problem, so they only pay it lip service rather than work on it. Perhaps the canonical example was Tumblr's "porn" problem. It was a huge "problem" that Tumblr blogs were just image streams of contributed porn, and it was "great" that Tumblr was adding so many new blogs a month and getting so many unique visits, etc. So the metrics on which Tumblr was being valued were being generated by content that was a problem; see the conflict? As a result, any real engineering efforts (according to stories, I don't know how true they are) at suppressing porn were quietly shelved.

So from an interviewing perspective, for a team that is going to address the "problem", you need to ascertain if the company wants a group which makes a lot of noise but doesn't change the status quo very much, or if the company wants to actually fix the problem. Depending on what kind of person you are, that will tell you if the opportunity is a good fit or not.


"Porn" problem, or porn "problem"? I find it hilarious that companies like Reddit or Imgur shun porn as beneath them yet the gigantic elephant in the room is that many, many, users come to their sites purely for porn.


Or for completeness "porn problem" :-) But you illustrate the point beautifully that there is the "story" about what a site is all about, and there are the "metrics" that show how successful a site is. And then there is the actual site usage by its users. Marketing would have you believe it is their "story" that is driving their "metrics" but they let the reader make that connection without saying it out loud. Gives them deniability later :-)


Mainstream advertisers don't like porn.


In public


Thanks for mentioning this. You've hit the question I've been focusing on most. I've been a Twitter user for more than 10 years at this point, and for a long time they weren't as serious about abuse as I wanted. Not even close. But I think this time might be different.

I want to be careful to respect my NDA here, but I think I can fairly say that a) I was totally skeptical when I first started talking to them, and b) I now think there is a lot of reason for hope.

Private stuff aside, I think there are a couple of public reasons to think this may be the turning point. One is the enormous beating they've been taking in the press about this for the last year or so. The other is that their abuse problem was a big factor in them not getting offers from a couple of suitors during their attempt to sell themselves. I think it's safe to say their abuse problem cost investors literally billions of dollars. That is a pretty big learning opportunity.


"The other is that their abuse problem was a big factor in them not getting offers from a couple of suitors during their attempt to sell themselves. I think it's safe to say their abuse problem cost investors literally billions of dollars."

For me that's a huge red flashing neon sign that your efforts will be directed to what top management considers the problem to be and not necessarily what you consider the problem to be.

The reality of that situation is that they're looking to change the perceptions of potential buyers. That may or may not involve much that you care about. Of course they'll say the two overlap, but when push comes to shove...


>that your efforts will be directed to what top management considers the problem to be and not necessarily what you consider the problem to be.

Unless you are the management.


Bear in mind that Twitter's most abusive users aren't going to say: "Thanks for kicking us out. We were wrong. You're right. Sorry." Instead, there's every reason to think that they will fight back in ways both expected and astonishing.

You could still prevail. But these kinds of spats can get incredibly personal. You'll want bosses that understand this is coming and can ride it out. People outside of work that have your back, too.


I think you're okay...Twitter reportedly didn't get acquired almost solely because of the abuse problem the buyer would inherit (e.g., Disney), and they now should have a crystal clear valuation for how much failing to address the abuse has cost. (Approximation: it is somewhere between $6B and $20B of added market cap.)

They haven't addressed abuse in the past because the market has stupidly hitched their stock price to MAUs, which I think has caused them to take their eye off the ball and prioritize that metric over all else. Address abuse, and risk lower MAUs.

So if they've finally decided that sacrificing MAUs in the short term is worth it (focusing instead on revenue, cost control, and cleaning up the toxic abuse), they may be able to finally realize some of that discounted value.

It sounds like it could be a great opportunity to help Twitter unleash some of that discounted value. Not the sole reason that the value will be reflected in the future market price, but a very important factor that at least gives them an option to sell out later at a good valuation if the growth in earnings is slower than expected and they feel the need to test the waters again.

Worst case, it's 100% focus on stemming abuse, they clean it up in a year, and you're working for Disney, Microsoft, Google, Amazon or some other acquirer.


The big caveat with a company that has historically failed at something is: are the people involved good enough to actually solve the problem this time around, given the company's historical trend? Or does it make more sense to go to another social network where abuse has been a focus for a longer time and learn from people who have been successful so far?

Turning around a ship is really hard. I've made the mistake of working at a company with a huge, seemingly insurmountable problem (Bing), having viewed it as a big opportunity, but I almost certainly should have pursued a job at the winner (Google) instead, considering I had no experience as a professional engineer yet and would have learned much more. On the other hand, if you already are the subject matter expert, it can be a huge opportunity to bring your skill set somewhere it is desperately needed.

Not that I really know first-hand of social networks that are great at solving the abuse problem, but this is one way to look at a career choice.


Also, turning around an aircraft carrier (e.g. a public company with thousands of employees) is a whole hell of a lot harder than turning around a tugboat (private company) or a raft (a startup with 10 people in a room).

Advice to the OP: don't be a martyr. If it looks like you can make a difference, go for it! Just don't pin your identity or sense of self on whether you succeed. The markets are a harsh mistress.


This is a hugely important point. Allow me to back it up with a little story time.

I took a job a little over a year ago at a company that I knew was struggling financially, but saw opportunity for a lot of personal growth. To give you some idea of how dysfunctional this place is, it rates a 0 / 12 on the Joel Test. There's no scale for negative; otherwise it might be negative.

After more than a year of working here, I think I've managed to get us to .5 / 12. And that's been a battle.

The biggest factor? I completely underestimated the social aspect of it. I failed to realize that, in any dysfunctional organization, the people there are either extremely tolerant of that dysfunction, or worse, worked to create it.

As I sit here writing this, my co-worker is at his desk rewriting swathes of code that I had working. We're 6 weeks past our delivery date, he says it's fixed, but he's still over there swearing, so that can't be true. He hasn't committed anything to the repo in over a month, so I have no idea what he's done (or how valid his complaints are). When he was doing this a few weeks ago, he rewrote a lot of my stuff only for it to turn out to be a small mistake in his code that was causing things to fail. I don't mind having my stuff criticized or rewritten, but logic would dictate if you write a bunch of new code and merge it in with stuff that was working, there's probably an error in the new code, or possibly the mating with the existing base. Sort of a look to the log in your own eye before you cast out the mote in your brother's.

And that's just how things go here. Projects that should take a few months have dragged on for years (before I got here). Untested changes get pushed into products. Nothing is documented. Comments in code are usually snarky quips directed at the compiler provider or Microsoft and littered with curses. And there are a million other things.

It's like that because he wants it like that, it likely won't change, and I should probably just leave.


What are we even talking about here - "quashing abuse"? Are we talking about going to work deleting spam, statementless insults, etc., literally combating people abusing the platform? Or is it about stopping people abusing other people, deleting hateful comments, and maybe censoring certain ideologies (nazis, etc.)? I'm just wondering what abuse is, and what a twitter moderator's job description even is.


I imagine "abuse" is whatever the moderator in question defines it to be.

Look at Twitter's history of curbing "abuse" and you'll find that this nebulous definition gives plenty of room for their subconscious and conscious biases to take over--a disproportionate number of the targets of censorship on Twitter are of a certain political persuasion.

So I see no nobility in the original commenter's goal. Taking away the voice of people you don't agree with is not noble, and censorship of any kind is unfortunate.


I have a fairly fundamentalist view of free speech, so in my opinion the only reasonable standard where censorship would be an acceptable solution to any speech is one of the following:

a) content that violates an individual or entity's privacy; or

b) direct incitement to physical harm to an individual, group or property; or

c) to protect the platform from legal damages or other legal quandaries

Unfortunately, most codes of conduct tend to be structured around public perception, which better serves a business's long-term interests than the "killer feature" of a truly open platform, which is seen by many as more trouble than it is worth from the perspective of the platform owner.


> a disproportionate number of the targets of censorship on Twitter are of a certain political persuasion.

Yeah, see, I don't worry so much about this. I purposely follow people with whom I disagree (but who are well spoken) on Twitter to avoid ending up in an echo chamber. So I don't disagree with you about the danger of quashing diversity of opinion; the last thing I need in my life is to preach to the choir.

But the fact remains that, 1st Amendment or not, you can't yell "Fire!" in a crowded theater and then plead "but, but, my rights!" to evade responsibility. Credible threats are assault (look it up, it's in every state's legal code). I would not be surprised if doxxing was found to fall under similar statutes by an enlightened judge if done with malice aforethought.

Furthermore, an individual's rights to free expression do not include a right to make Twitter spend money or lose opportunities due to their chosen expressions. Twitter != the government. If push comes to shove, their self-preservation must take precedence (legally, since they have a fiduciary duty to their shareholders as a public company).

It's a chestnut, to be sure, but freedom of the press applies to those who own a printing press. Until and unless Twitter is nationalized (heaven forfend), they're restricted (at worst) by common carrier provisions. They can do a good job if it's in the best interests of their shareholders (and recent events suggest that may be the case). It's really an issue of how (how to maximize shareholder value, how to improve public perception, how not to drive away ads...).


Well, let me put it this way: banning people for threatening violence against other users in the name of social justice costs Twitter opportunities due to being unacceptable to a large chunk of the tech community (and I'm not even talking about them doing it to right-wing blowhards like Milo).


And yet, here you are on a platform that does not permit its users to threaten violence against other users.

Is that costing Hacker News "opportunities?" Compared to what? The opportunities afforded by not driving away all the users for whom threats of violence make a site toxic even if you aren't on the receiving end?

And what do you mean by a "large chunk of the tech community?" Do you mean a significant proportion relative to the "tech community?" Or a significant proportion relative to Twitter's user base as a whole?

And how do you define "the tech community?" Do you mean this cosy little echo chamber we inhabit made up of startup hackers and like-minded individuals? Or do you mean everyone working in tech, including the kinds of people working for BigCo who never go near Hacker News and visit Reddit for affinity subreddits like /r/volvo?

---

All design decisions are tradeoffs. If you make one choice, you alienate a certain set of users. But if you make a different choice, you'll alienate another set of users while placating some of the first set.

So yes, there is some opportunity given up if Twitter goes around banning people who incite threats of violence (your example), but there is some other opportunity given up if it tolerates them.


I don't know if you've noticed, but Hacker News is not exactly terribly popular amongst the social justice part of the tech community. Also, users' opinions can only be influenced by threats of violence that they hear about; very few people who aren't in the immediate social circles of the offenders generally hear about these threats, and those who do are under social pressure to accept them for obvious reasons. Not banning people who haven't made threats of violence, but whose opponents have tried for political reasons to tie them to unrelated people who have, has probably done far more damage to Twitter's acceptance amongst people who refuse to participate in communities where death threats are normal.


It's a small fraction of the tech community that has a problem with banning people for threatening others with violence. And the tech community is a small fraction of Twitter's userbase.

Not catering to the libertariat has never really harmed a mainstream-targeting business.


To define "abuse" we'll need to talk about expectations and norms around collective communication.

Social communication platforms (implicitly or explicitly) embody certain communication goals.

A. Striving for constructive conversation is one family of goals. These goals are valid even if fuzzy and imperfectly enforced.

B. Striving for "(almost) anything goes, up to legal limits, such as libel" is another kind of goal. This may seem safer and more ideologically comfortable to some. However, in practice, it still has large complexity and uncertainty.

Social media platforms are not necessarily responsible for promoting free speech in the same way that democratic institutions are bound to protect individual rights. Many individuals have multiple channels for speech. They often can find a venue that works best for them.

Speaking generally -- there may be exceptions in specific cases -- organizations are not required to give every viewpoint equal airtime. Organizations are free to choose how they want to structure their environments, up to the point at which they accept money or are subject to the purview of governmental bodies.

Some people use the word "censorship" too loosely, in my view. Yes, governmental censorship is only justified with a very high and particular standard, subject to close scrutiny. However, private organizations are much freer to filter as they see fit, and I think this is justified. One church does not have to voice every viewpoint at a gathering. A magazine does not have to print every article submitted. Not every story gets to the top of Hacker News. The rules at play are, by definition, filters.

I want an online platform for constructive conversation. This requires some kind of norms and probably some kind of message "shaping" for lack of a better word. Call it filtering or summarization or topic modeling. Whatever it is, people have limited time, and to be effective, a platform has to design around that constraint. Platforms need to demonstrate excellent collective user interface design.

I think almost all "social" platforms are falling very short to the extent that they claim they are effective social communication tools. I wrote much more here: https://medium.com/@xpe/designing-effective-communication-67...


"Spam" is also a nebulous term. A million actual humans having a genuine conversation about their Nazi ideology isn't spam, in most people's definitions. But a million actual humans having that genuine conversation while at-mentioning me unequivocally makes it impossible for me to productively use Twitter the product - which is really the reason spam is bad.

And that applies to traditional spam too: if a bunch of fake Nigerian princes emailed each other and only each other, we wouldn't have a reason to get rid of it, as long as there was technical capacity to deliver those messages. The spam part is that it makes it hard for me to get the emails I want.

I might have a disagreement with Twitter's priorities if there are a million actual humans quietly discussing their Nazi ideologies in a corner and not bothering the rest of us, and Twitter prioritizes their quality of service over the rest of our quality of service. Other people might not, and that's fair. Other people might have a problem with it regardless of whether it impacts other people's service, and that's fair. But I think that such a scenario isn't what is commonly meant by "abuse".


> What are we even talking about here - "quashing abuse"?

Well, even if we don't agree on the full definition, we could start somewhere, specifically at the bottom of the barrel.

Harassment is an easy one: death threats, threats of violence, unwanted contact with people in real life, unprompted sexual comments, threats of doxxing / swatting / hacking, relentless insults, a combination thereof, etc., aren't really difficult to agree on. I don't think there's a convincing or defensible argument that someone has a right to follow someone around and bother them /directly/. Just improving this would be a massive improvement in many Twitter users' lives, and Twitter has been condemned many times for failing to take the simplest actions to prevent this.

Hateful ideologies discussed privately is quite different than that - I do think those still cause measurable harm and I'm against that personally, but at least I can understand why some people hold a free-speech slippery slope argument for not interfering.


Well that's part of the challenge, isn't it? What constitutes abuse?

The GP doesn't say that they're going to be a moderator, BTW. They might be a developer who would be building features that would allow each user to better determine for themselves how to manage abuse. Or tooling that would allow moderators to do a better job of managing abuse complaints.

The native feature set is pretty simple right now. A lot of people use Tweetbot for its temporary mute feature, which is not native to Twitter.


I can't imagine there is anything other than a sliding scale at work here. It's difficult to come up with hard and fast, absolute rules.

But there seem to be some easy wins - there is an increasing trend of Jewish users being on the receiving end of tweets suggesting that they belong in an oven. I've seen some tweets showing that when the user reports said tweet, Twitter replies saying that there has been no rule violation and that the offending user won't be suspended.

Working out a better policy for outright hate speech like that seems like a good start.


> Twitter gets extremely mixed reviews from people who are the targets of abuse

That's (as you note) putting it very lightly. It's basically unusable as a public platform if you're moderately famous. Go watch Jimmy Kimmel's bit with celebrities reading mean tweets on YouTube and imagine being on the receiving end of it. The ability to lock down your account misses the point, though it does say that Twitter thinks abuse might possibly be a problem (as do comments by its CEO).

Screening out abusive users is a hard problem that hasn't been properly solved yet, anywhere in the world. An eternal September, to revive the phrase. G+/YouTube and Facebook's real-name policies haven't managed to solve it, and Facebook is only able to screen out the worst - gore and pornography - with devolving political discussions leading to many unfriendings this election season. Maybe not quite abuse, but still unpleasant.

If fighting abuse is your life's mission, then join Twitter. See what they're doing about abuse, talk to people who have tried to fight abuse for years, find out what's succeeded, what's failed. There may be some Twitter org-specific implementation barriers, but that's still a learning experience for the future. If you find some Twitter org-specific barriers that are insurmountable, get out.

My bias here isn't that I work for Twitter (I don't), but I doubt Twitter's inability to quash abuse has anything to do with corporate governance (other than an inability to spend infinite amounts of money on an infinite number of moderators).

Humans are still currently the best at understanding humans, and we still do a terrible job sometimes. Computers aren't able to parse sarcasm, idioms, or subtlety, and being fed in plain text instead of speech makes it even harder as there's no vocal tone to analyze either. Twitter's 140 character limit hurts quite a bit here.

I agree that making a difference is key, but the problem itself also has to be considered - the early teething pains at Twitter in reaching "webscale" have been fixed by now (when was the last time you saw the fail whale?), but "people can be assholes to each other" is probably the oldest problem in the world.


Comparing abuse on Twitter and other platforms is like comparing Mt. Everest with Twin Peaks.

On Facebook and most other social networks, the subject of harassment can at least block the perpetrator and refuse to friend an alt account. Since the primary use case for Twitter is to have public tweets and open replies/mentions, blocking a user making death threats won't solve the problem; they will just open a new account and flame you from there. On YouTube, there is less reason to look at the comments; a celebrity can focus on the number of views as a metric for success.

The reach of abusers on Twitter is far greater than on other networks. Add to that a lack of defined communities, as in Reddit, Facebook Groups, and other message boards: there are no community moderators on Twitter to react to harassment and the posting of personal information.

I see the handling of abuse on Twitter as a hard problem that will take a mix of pattern-matching algorithms for brigade detection, machine learning on tweet sentiment and content (doxxing, death threats, other TOS violations), and, least scalably, a large pool of tier 1 support staff to act on recommendations to ban users.
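
To make that concrete, here is a minimal, illustrative sketch of such a layered triage pipeline (all class names, patterns, and thresholds below are invented for illustration; this is not anything Twitter actually runs): cheap pattern matching flags clear-cut violations, a crude brigade heuristic watches for many distinct accounts piling onto one target in a short window, and anything ambiguous gets routed to a human review queue.

    import re
    from collections import defaultdict
    from dataclasses import dataclass

    # Hypothetical patterns for clear-cut TOS violations; a real system would
    # use trained models rather than a handful of regexes.
    VIOLATION_PATTERNS = [
        re.compile(r"\bkill yourself\b", re.IGNORECASE),
        re.compile(r"\b\d{3}[- .]\d{3}[- .]\d{4}\b"),  # posted phone numbers (doxxing)
    ]

    BRIGADE_WINDOW_SECONDS = 600    # 10-minute window for detecting pile-ons
    BRIGADE_SENDER_THRESHOLD = 20   # distinct senders mentioning one target

    @dataclass
    class Tweet:
        sender: str
        target: str        # the @-mentioned user, if any
        text: str
        timestamp: float

    class AbuseTriage:
        def __init__(self):
            self.auto_flagged = []                      # clear violations -> automated action
            self.review_queue = []                      # ambiguous cases -> tier 1 support
            self.recent_mentions = defaultdict(list)    # target -> [(timestamp, sender), ...]

        def ingest(self, tweet: Tweet) -> None:
            # Layer 1: pattern matching on content.
            if any(p.search(tweet.text) for p in VIOLATION_PATTERNS):
                self.auto_flagged.append(tweet)
                return

            # Layer 2: brigade heuristic -- many distinct senders hitting one
            # target inside the window suggests a coordinated pile-on.
            window = self.recent_mentions[tweet.target]
            window.append((tweet.timestamp, tweet.sender))
            cutoff = tweet.timestamp - BRIGADE_WINDOW_SECONDS
            window[:] = [(t, s) for t, s in window if t >= cutoff]
            if len({s for _, s in window}) >= BRIGADE_SENDER_THRESHOLD:
                # Pile-ons are context-dependent; route to humans, don't auto-ban.
                self.review_queue.append(tweet)

The hard parts this sketch glosses over are exactly the ones above: sentiment and intent classification, and scaling the human review pool.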


It's baffling that they don't require email, or better yet, phone verification before allowing you to tweet. That one simple change would make it a lot harder to spin up alts. On second thought, it's not baffling. I'm sure they're worried it would hurt their user growth numbers.
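
For what it's worth, the gate itself is trivial. Here's a toy sketch (the names and the per-phone cap are made up, and it assumes the verification-code step is handled elsewhere) of "no verified email or phone, no tweeting" plus a cap on accounts per phone number to slow down throwaway alts:

    from dataclasses import dataclass, field
    from typing import Dict, Optional

    MAX_ACCOUNTS_PER_PHONE = 3   # hypothetical cap on alts per phone number

    @dataclass
    class Account:
        handle: str
        email_verified: bool = False
        phone: Optional[str] = None
        phone_verified: bool = False

    @dataclass
    class SignupPolicy:
        accounts_per_phone: Dict[str, int] = field(default_factory=dict)

        def can_tweet(self, account: Account) -> bool:
            # Require at least one verified contact channel before allowing posts.
            return account.email_verified or account.phone_verified

        def attach_verified_phone(self, account: Account, phone: str) -> bool:
            # Refuse numbers already tied to too many accounts.
            count = self.accounts_per_phone.get(phone, 0)
            if count >= MAX_ACCOUNTS_PER_PHONE:
                return False
            self.accounts_per_phone[phone] = count + 1
            account.phone = phone
            account.phone_verified = True   # assumes the SMS code check already passed
            return True

Of course, the real cost isn't the code; it's the user growth numbers taking the hit, which is exactly the worry.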


They'd probably lose 80 percent of their "user accounts", which they use as a nice inflated metric.


I'm sure their only meaningful metrics for acquisition are ad impression growth, revenue growth, and cost control. Active users is headline-grabbing, but unless it translates to revenue, it's secondary.


Speaking as an ex-insider, Twitter cares very much about abuse. It also cares about free speech. The trick is enabling one without enabling the other.


Why is the block button not sufficient?


Because the block button doesn't actually block. It poorly hides the user's content from your view while other users can still see the abuse.


Why would that matter? What kind of useable communications platform prevents third parties from conversing about a separate person?


How is that harmful to the original poster? They don't see it.

Seems adequate, especially when arriving at a universally accepted definition of "abuse" that isn't just censorship-as-abuse itself is such a hard problem.


[flagged]


> ...libtard...

We've asked you already not to do this, so we've banned this account.



This is very good advice.


Quashing abuse means preventing people from creating dozens of throwaway accounts. Preventing people from creating dozens of throwaway accounts would immediately eliminate any growth in Twitter's userbase - it would kill one of Twitter's primary statistics.

That will never happen.

Therefore quashing abuse will never happen.

Sorry.



