I have ~3k followers on one of my pages. Usually ~200-300 people see any given post if it has no engagement. If it has normal engagement, it might get to 1000-2000 views, still short of my follower count. If it has sparked controversy and there's a fight, I've had views spike to over 8000-9000 without any shares. Facebook posts "your friend commented on this" to your friends' timelines and others start piling in too. Facebook emails me saying "this post is getting more attention than 95% of the rest of your posts, please pay us money to show it to more people". The more toxic the comments, the more views it gets and the more Facebook begs me to pay them for it.
That's the problem with these algorithms that humans don't watch over. Usually it works great and good content is seen by the people who want to see it. But every now and again it goes out of control and people end up getting hurt and Facebook/Twitter profit from it and even promote it. And as the person who posted it, I have zero ways to stop it from spreading other than deleting the post.
-edit- oh another story... I run a news site for a town, let's call it Townsville. There is another Townsville in another state, but it is not my Townsville. I had a post go super viral, 90,000 views from my 3k followers, because somehow the post made it to the wrong Townsville and 87,000 people were being shown the wrong news article. Again, I had no way to stop this, no tools to correct it. Absolute insanity.
Well, duh. If people are fighting they are definitely "engaged", but not in a good way. Defining more engagement as more success is what made social networks so toxic. The current state of affairs, where more eyeballs equals more ad money, needs to change.
In my case, it's not that easy. Simply posting factual news updates is often enough to trigger wild responses. For example, a few months ago I attended a city council meeting and wrote a Facebook post during the meeting saying "City has approved a new 70 unit condo project" with a picture of the plans. It was one of the most toxic comment sections I've ever seen simply because some people disagreed with the action being taken. Not my actions, but the actions I was reporting on. The answer certainly is not to stop reporting the news.
On a side note, your comment comes dangerously close to sounding like a personal attack. Perhaps I'm reading it wrong but it seems like you're saying if you have problems on Facebook (which I and the person you're responding to have said we do), we must be pathetic and posting like edgy children seeking the toxicity we find. If that's not your intention, maybe you could clarify?
Don't be fooled. Someone is spot-checking the training data against the output at a statistically valid sampling interval. ML in this case is the data equivalent of a limited liability corp.
Because one wants to make sure one's program is operating as intended?
These companies can easily put a filter over this engagement maximization algorithm and they are choosing not to.
I, for one, welcome our mechanical overlords.
This document is: positive (+0.52) Magnitude: 1.39
And it is worth noting that if these companies could stop this with limited engagement impact, they totally would as it would get them out of the horrible political hole they are in right now.
tl;dr: “everyone i dislike is a nazi or something close” syndrome
What am I missing here? There was no harassment of any sort.
Alternative headlines could have been:
"Twitter has an algorithm that helps you gain more followers"
"Twitter has an algorithm that helps you drive awareness"
"Twitter has an algorithm that helps you get more twitter followers for your cause or business"
"Twitter has an algorithm that expands your social impact beyond your sphere."
In other news: public posts on public site go.... public.
Anyone with a large twitter following knows roughly what the makeup of their follower base is, and they compose tweets accordingly. While always necessary to some extent, it's usually hard to contextualize every single tweet as if it could be read by anyone, so it often isn't done.
As a silly contrived example, let's say I am a software developer who focuses on operating system performance and I tweet something like "I'm working on an algorithm to make killing children an order of magnitude more efficient". (Note to real Twitter users: never tweet that.)
My followers know I'm talking about killing child _processes_ on a computer. So they reply things like "oh, that would be great, it would make this one shell script I have a lot faster to execute" or maybe even "personally I'd rather you encouraged users to use threads rather than forking lots of processes". There might be a heated discussion, but it will be with a HUGE shared context of information.
Now the Twitter algorithm picks it up, and the tweet gets seen by lots of people who don't know anything at all about operating systems. They are, understandably, completely appalled. They start responding with anger. Threats, abuse, etc.
So, Twitter changing the dynamic from "your tweets will primarily be seen by your followers" to "your tweets will frequently be seen by your followers' followers" can actually have a big impact on the platform. It will at minimum take some adjustment. Operating with the assumption of one dynamic when there is in fact the other will be...painful.
But thinking about it a bit more, it might be one of the worst ways to do so.
For example, assuming roughly that both favorites and retweets represent general agreement, using those mechanisms to surface new tweets to people makes sense. If someone you follow (and presumably respect) quote retweets someone you don't follow with "Yes this!" or something similar, then you're already primed to agree with the person you follow.
But, often at least, replying without faving/retweeting could very well bias toward DISagreement. Now instead you're going to see someone you follow and respect arguing about something, and you're primed to agree with them, and potentially pile on to the original tweet author even though you might not have cared about the topic a minute ago.
Twitter ALREADY has a way to signal that you want all your followers to see a tweet you saw: retweet. And even showing your followers things you favorited at least means they'll see things you probably like. But it seems there's at least a reasonable argument that showing your replies to your followers is setting up a situation where pile-ons to the original tweet are likely.
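The argument above can be sketched as a toy scoring heuristic. To be clear, this is NOT Twitter's actual ranking algorithm, just an illustration of the design this comment advocates: retweets and favorites count as positive signals for surfacing a tweet to followers-of-followers, while bare replies get no weight, since a reply carries no sign of agreement or disagreement.

```javascript
// Toy surfacing heuristic, illustrating the argument in this thread.
// All weights are made up for illustration.
function surfaceScore({ retweets, favorites, replies }) {
  const RT_WEIGHT = 2;    // retweet = explicit "show my followers this"
  const FAV_WEIGHT = 1;   // favorite = weaker endorsement
  const REPLY_WEIGHT = 0; // replies excluded: no sign of agreement
  return RT_WEIGHT * retweets + FAV_WEIGHT * favorites + REPLY_WEIGHT * replies;
}

// A tweet with a huge, angry reply thread but few endorsements
// would not be pushed to strangers under this heuristic:
surfaceScore({ retweets: 1, favorites: 3, replies: 500 }); // 5
```

Under such a scheme, a controversial tweet attracting hundreds of replies but few endorsements would stay within its original audience instead of being amplified into a pile-on.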
Perhaps not for blue checkmarks (they've declared themselves central to the public debate), but for average users Twitter should try to calm down pile-ons.
But then so would the engagement and ad revenue.
I doubt it's because they can't. The more likely answer is they don't want to.
Your assumption that people more intelligent than you "should have figured this out by now" reveals the very problem: no one has yet come up with a good automated solution for this. If YOU do, you'll be a millionaire.
People have become millionaires, billionaires even, for the exact opposite of what you say. You become rich by making sure controversial content is spread as far and wide as possible, because hatred and fear sell as entertainment. People get addicted to it. You don't become rich by filtering out hateful content, you become rich by enabling it and spreading it because that's what people want (as long as they're not the target).
The real problem is the incentives, both for Twitter and for people interacting on twitter. The solution is probably _social_ rather than technical, but as long as Twitter wants to keep your eyeballs on their site for as long as possible (so they can sell ads or whatever to advertisers) a whole host of solutions are going to be verboten.
By way of example, Hacker News literally has a feature to just lock you out of the site if you are using it more than you want to. That is great for us, the users. But Twitter would never do such a thing.
This is not an easy problem, and it does no one any good to pretend that it is. Tackling the issue also requires those considering it to consider other social situations. Is someone supporting equal treatment of women in Saudi Arabia practicing hate speech against the conservative ruling party? If we'd had systems that let us actively regulate speech in the way we can now, would it have been appropriate to block Martin Luther King Jr. because his message was growing civil disobedience and causing families to bicker over race politics? Why are we so damn certain that any argument today will necessarily be decided by a regression rather than a wider acceptance of more progress? Change in human societies is always ugly, always comes at the cost of pain and strife, and on the balance has usually moved us in a forward direction. I can't say the same for censorship. Censorship makes any forward movement impossible, and only serves to leave regressive mindsets to fester and make-believe that they have more support than they actually do.
I see these people here trying to debate solutions like good engineers, but unless they work at Twitter, it's a waste. We can guess all day and come up with a million solutions but when it comes down to it, Twitter absolutely has the ability to control posts that spiral out of control. What they don't have is the desire to do so.
I was about to argue against this, but then realised it's worse than you suggest.
If I as a white person used the N word to describe a black person I would be labelled a racist, whereas a black person can say it all day long. But if I black up and say it, it's even worse. And then with gender the rules are almost reversed: I can declare myself a woman and expect that to be somewhat respected.
And on the internet no one knows you're a dog, or a transvestite in black face.
All while "learn to code" is used to harass in some contexts...
But we expect Twitter folks to just figure out an algorithm to filter out "hateful" posts, when there isn't even an accepted definition of hateful? The first replies it would filter would be all the people telling Trump how bad and evil he and his policies are, while the people who actually try to harass others will find quick and easy ways to game the system, as they always have; that's my prediction of a 'best case' outcome.
I don't use the like feature on the website at all and often comment on artwork saying how nice it is or whatever.
Not always, but often.
There's a pretty long list of CSS classes you can just toss a "display: none" on, but unfortunately other stuff can only be discerned by checking that certain elements are inside a given container. I had to start writing actual JS to evaluate the contents of the page and omit/delete stuff that way.
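The approach described above, falling back to JS when a CSS class alone can't identify an element, might look something like this. The class names and text markers here are hypothetical placeholders, not Twitter's real markup; the point is the two-tier check.

```javascript
// Userscript-style sketch: decide whether a timeline item should be
// hidden. Class names and text markers below are made up for
// illustration; a real filter would use the site's actual markup.
function shouldHide(item) {
  // item: { className: string, text: string }

  // Tier 1: some unwanted items carry a distinctive CSS class,
  // so a class check (or a pure-CSS "display: none" rule) suffices.
  const hiddenClasses = ["promoted-tweet", "who-to-follow"]; // hypothetical
  if (hiddenClasses.some(c => item.className.split(" ").includes(c))) {
    return true;
  }

  // Tier 2: other items look like ordinary tweets and can only be
  // identified by inspecting their contents, e.g. a "liked" header
  // inside an otherwise normal container.
  const hiddenMarkers = [" liked", " follows"]; // hypothetical
  return hiddenMarkers.some(m => item.text.includes(m));
}

// In a userscript this predicate would drive the DOM updates, e.g.:
// document.querySelectorAll("[data-testid]").forEach(el => {
//   if (shouldHide({ className: el.className, text: el.textContent }))
//     el.style.display = "none";
// });
```

The text-based second tier is what forces actual JS: a stylesheet can match classes and structure, but it can't read element contents.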
The first bucket has a vastly different Twitter experience. As an API client user I have no ads, no polls, no recommendations or friends likes, no "someone you follow replied" experience. Just a timeline of who I choose to follow, the blissful way it always was. No wonder they wanted to shut the API down.
(Apropos of nothing, the first bucket contains all the tech journalists.)
Obviously it's a tradeoff, but I found the downsides of the official experience to be less frustrating than the downsides of the third-party experience.
It’s a more general case of advertising pollution. Just as it benefits advertisers to make viewers uncomfortable and manipulate their attention, Reddit and Twitter (and Facebook!) systematically display messages that make users uncomfortable to get their attention, stimulate emotional vulnerability, and create opportunities for marketers to step in with a palliative, “shopping therapy”.
Of course, most people don't notice this. I've never had a post with more than 100 replies, for example, so I would never have been aware of this.
And on top of all that, Twitter's own editorial team regularly stokes political/cultural controversy by boosting non-issues in Twitter Moments and Trending topics.
If you make a "viral tweet", don't read the replies. You have the tools to avoid them, since Twitter allows you to mute a thread.
If that's what usually happens, but sometimes randomly they get tons and tons of replies that they don't want (as claimed by this post), that's an interesting and noteworthy flaw that I've never seen specifically discussed.
I'm usually the last to defend Twitter but this title is pure clickbait.