At least one missing element is that of reputation. I don't think it should work exactly like it does in the real world, but the absence of it seems to always lead to major problems.
The cost of being a jerk online is too low - it's almost entirely free of any consequences.
Put another way, not everyone deserves a megaphone. Not everyone deserves to chime in on any conversation they want. The promise of online discussion is that everyone should have the potential to rise to that, but just granting them that privilege from the outset and hardly ever revoking it doesn't work.
Rather than having an overt moderation system, I'd much rather see where the reach/visibility/weight of your messages is driven by things like your time in the given community, your track record of insightful, levelheaded conversation, etc.
I agree with the basic idea that we want reputation, but the classic concept of reputation as a single number in the range (-inf, inf) is useless for solving real-world problems the way we solve them in the real world.
Why? Because my reputation in meatspace is precisely 0 with 99.9% of the world's population. They haven't heard of me, and they haven't heard of anyone who has heard of me. Meanwhile, my reputation with my selected set of friends and relatives is fairly high, and undoubtedly my reputation with some small set of people who are my enemies is fairly low. And this is all good, because no human being can operate in a world where everyone has an opinion about them all the time.
Global reputation is bad, and giving anyone a megaphone so they can chime into any conversation they want is bad, full stop. Megaphone-usage should not be a democratic thing where a simple majority either affirms or denies your ability to suddenly make everyone else listen to you. People have always been able to speak to their tribes/affinity groups/whatever you want to call them without speaking to the entire state/country/world, and if we want to build systems that will be resilient then we need to mimic that instead of pretending that reputation is a zero-sum global game.
Social reputation IRL also has transitive properties - vouching from other high-rep people, or group affiliations. Primitive forms of social-graph connectedness have been exposed in social networks but it doesn't seem like they've seen much investment in the past decade.
So there's 8 billion people in the world approximately. Therefore you're saying that you have reputation (either good or bad) with 8e9 x 0.001 of them. That is 8 million people? Wow, you maintain a very large reputation, actually. I hope it's not bad reputation!
But all jokes aside, you're exactly right. Reputation is only relevant in your closer circles. Having a global reputation score would just be another form of the Twitter blue check mark, but worse.
I want to hear from people that I don't know, the average Joe, not from Hollywood, Fox News persons, or politicians. And I want to hear from reputable people in my circles, but not so much that I'm just hearing from the echo chamber.
Social media is just a really big game that people are playing, where high scores are related to number of followers. So dumb.
> The cost of being a jerk online is too low - it's almost entirely free of any consequences.
Couldn't agree more here.
Going back to the "US Postal service allows spam" comment made by Yishan: well yes, the US postal service will deliver mail that someone has PAID to have delivered, and they've also paid to have it printed. There's not a zero cost here, and most businesses will not send physical spam if there isn't at least some return on investment.
One big problem not even touched by Yishan is vote manipulation, or to put it in your terms, artificially boosted reputation. I consider those to be problems with the platform. Unfortunately, I haven't yet seen a platform that can solve the problem of "you, as an individual, have ONE voice". It's too easy for users to make multiple accounts, get banned, create a new one, etc.
At the same time, nobody who's creating a platform for users will want to make it HARDER for users to sign up. Recently Blizzard tried to address this (in spirit) by forcing users to use a phone number and not allowing "burner numbers" (foolishly determined by "if your phone number is pre-paid"). It completely backfired for being too exclusionary. I personally hate the idea of Blizzard knowing and storing my phone number. However, the idea that it should be more and more difficult or costly for toxic users to participate in the platform after they've been banned is not, on its own, a silly idea.
There is no such thing as "Reputation", or rather - Reputation isn't one dimensional, and it's definitely not global. There will naturally emerge trusted bastions, but which bastions are trusted is very much an individual choice.
Platforms that operate on, and are valuable for, their view of reputation are valid and valuable. At the same time, that's clearly a form of editorial control. Their reputation system saying "Trust us" becomes an editorial statement. They can and should phrase their trust as an opinion.
On the other hand, an awful lot of platforms want to pretend they are the public square while maintaining tight editorial control. Viewing Twitter as anything other than a popular opinion section is foolish, yet there's a lot of fools out there.
Maybe? Reputation systems can devolve into rewarding groupthink. It's a classic "you get what you measure" conundrum, where once it becomes clear that an opinion / phrase / meme is popular, it's easy to farm reputation by repeating it.
I like your comment about "track record of insightful, levelheaded conversation", but that introduces another abstraction. Who measures insight or levelheadedness, and how do you avoid that being gamed?
In general I agree that reputation is an interesting and potentially important signal; I'm just not sure I've ever seen an implementation that doesn't cause a lot of the problems it's trying to solve. Any good examples?
Yeah, definitely potential for problems and downsides. And I don't know of any implementations that have gotten it right. And to some degree, I imagine all such systems (online or not) can be gamed, so it's also important for the designers of such a system to not try to solve every problem either.
And maybe you do have some form of moderation, but not in the sense of moderating agreement/disagreement with ideas - rather, moderation of behavior, like a debate moderator, based on the rules of the community.

Your participation in a community would involve reading, posting as desired once you've been in a community for a certain amount of time, taking a turn at evaluating N comments that have been flagged, and taking a turn at evaluating disputes about evaluations, with the latter 2 being spread around so as to not take up a lot of time (though having those duties could also reinforce your investment in a community).

The reach/visibility of your posts would be driven off your reputation in that community, though people reading could also control how much they see - maybe I only care about hearing from more established leaders while you are more open to hearing from newer / lower-reputation voices too. An endorsement from someone with a higher reputation counts more than an endorsement from someone who just recently joined, though not so huge of a difference that it's impossible for new ideas to break through.
As far as who measures, it's your peers - the other members of the community - although there needs to be a ripple effect of some sort: if you endorse bad behavior, then that negatively affects your reputation. If someone does a good job of articulating a point, but you ding them simply because you disagree with that point, then someone else can ding you. If you consistently carry out the community duties well, it helps your reputation.
The above is of course super hand-wavy and incomplete, but something along those lines has IMO a good shot of at least being a better alternative to some of what we have today and, who knows, could be quite good.
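For example, here's a deliberately naive sketch of the endorsement-weighting part (every name, weight, and cap is made up just to show the shape of the idea):

    # Toy sketch of reputation-weighted endorsements within a single
    # community. All names, weights, and caps are invented for
    # illustration, not proposals.

    class Member:
        def __init__(self, name, reputation=1.0):
            self.name = name
            self.reputation = reputation  # community-local score

    class Post:
        def __init__(self, author):
            self.author = author
            self.endorsement_weight = 0.0

        def endorse(self, endorser):
            # An endorsement counts in proportion to the endorser's own
            # standing, but is capped so no single member dominates.
            self.endorsement_weight += min(endorser.reputation, 5.0)

        def visibility(self, reader_threshold=0.0):
            # Reach is author reputation plus accumulated endorsements;
            # each reader can set their own floor for what they see.
            score = self.author.reputation + self.endorsement_weight
            return score if score >= reader_threshold else 0.0

    veteran = Member("veteran", reputation=4.0)
    regular = Member("regular", reputation=1.0)
    newcomer = Member("newcomer", reputation=0.5)

    post = Post(author=newcomer)
    post.endorse(veteran)  # counts for a lot
    post.endorse(regular)  # counts for less
    print(post.visibility(reader_threshold=2.0))  # 5.5 -> visible to this reader

The point isn't the specific numbers, just that "who endorsed you" and "how picky is the reader" can both feed into reach without a central moderator deciding anything.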
> Your participation in a community would involve reading, posting as desired once you've been in a community for a certain amount of time, taking a turn at evaluating N comments that have been flagged, and taking a turn at evaluating disputes about evaluations, with the latter 2 being spread around so as to not take up a lot of time (though, having those duties could also reinforce your investment in a community).
This is an interesting idea, and I'm not sure it even needs to be that rigorous. Active evaluations are almost a chore that will invite self-selection bias. Maybe we use sentiment analysis/etc to passively evaluate how people present and react to posts?
It'll be imperfect in any small sample, but across a larger body of content, it should be possible to derive metrics like "how often does this person compliment a comment that they also disagree with" or "relative to other people, how often do this person's posts generate angry replies", or even "how often does this person end up going back and forth with one other person in an increasingly angry/insulting style"?
It still feels game-able, but maybe that's not bad? Like, I am going to get such a great bogus reputation by writing respectful, substantive replies and disregarding bait like ad hominems! That kind of gaming is maybe a good thing.
One fun thing is this could be implemented over the top of existing communities like Reddit. Train the models, maintain a reputation score externally, offer an API to retrieve, let clients/extensions decide if/how to re-order or filter content.
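Just to illustrate the passive-metrics idea (the sentiment classifier here is a hypothetical stub, not a real API, and the metric is the crudest one possible):

    # Rough sketch of deriving behavioral metrics passively from a body
    # of comments. classify_sentiment stands in for whatever sentiment
    # model you'd actually use; here it's just a toy stub.
    from collections import defaultdict

    def classify_sentiment(text):
        # Hypothetical stand-in: a real system would use an NLP model.
        return "angry" if "!" in text else "neutral"

    def behavior_metrics(comments):
        # comments: list of dicts like
        # {"author": ..., "text": ..., "replies": [reply texts...]}
        stats = defaultdict(lambda: {"posts": 0, "angry_replies": 0})
        for c in comments:
            s = stats[c["author"]]
            s["posts"] += 1
            s["angry_replies"] += sum(
                1 for r in c["replies"] if classify_sentiment(r) == "angry"
            )
        # Metric: how often do this person's posts draw angry replies?
        return {
            author: s["angry_replies"] / s["posts"]
            for author, s in stats.items()
        }

    sample = [
        {"author": "alice", "text": "Interesting point.", "replies": ["Thanks"]},
        {"author": "bob", "text": "You're wrong", "replies": ["No, you!!", "Ugh!"]},
    ]
    print(behavior_metrics(sample))  # {'alice': 0.0, 'bob': 2.0}

In any small sample this would be noisy and easy to misread, but averaged over a lot of content and compared across users it might be a usable signal.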
This is pure hypothetical, but I bet Reddit could derive an internal reputation number that is a combination of both karma (free and potentially farmable) and awards (which people actually pay for, or which are scarce, and so show what they value) that would have a better signal-to-noise ratio than karma alone.
It could work like Google search where link farms have less weight than higher quality pages. Yes, you can still game it yet it will be harder to do so.
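Purely as a strawman, the weighting could be as simple as something like this (constants pulled out of thin air):

    import math

    def combined_reputation(karma, paid_awards, scarce_awards):
        # Hypothetical weighting: karma is cheap and farmable, so it is
        # log-damped; awards cost real money or are scarce, so each one
        # counts for more. The constants are arbitrary.
        return math.log1p(max(karma, 0)) + 3 * paid_awards + 5 * scarce_awards

    # A karma-farming account with no awards scores barely above a modest
    # account that people have actually chosen to reward.
    print(combined_reputation(karma=500_000, paid_awards=0, scarce_awards=0))  # ~13.1
    print(combined_reputation(karma=2_000, paid_awards=2, scarce_awards=1))    # ~18.6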
This is a good idea, except that it assumes _reputation_ has some directional value upon which everyone agrees.
For example, suppose a very famous TV star joins Twitter and amasses a huge following due to his real-world popularity independent of Twitter. (Whoever you have in mind at this point, you are likely wrong.) His differentiator is he's a total jerk all the time, in person, on TV, etc. He is popular because he treats everyone around him like garbage. People love to watch him do it, love the thrill of watching accomplished people debase themselves in attempts to stay in his good graces. He has a reputation for being a popular jerk, but people obviously like to hear what he has to say.
Everyone would expect his followers to see his posts, and in fact it is reasonable to expect those posts to be more prominent than those of lesser-famous people. Now imagine that famous TV star stays in character on the platform and so is also total jerk there: spewing hate, abuse, etc.
Do you censor this person or not? Remember that you make more money when you can keep famous people on the site creating more engagement.
The things that make for a good online community are not necessarily congruent with those that drive reputation in real life. Twitter is in the unfortunate position of bridging the two.
I posted some additional ideas in a reply to another comment that I think addresses some of your points, but actually I think you bring up a good point of another thing that is broken with both offline and online communities: reputation is transferrable across communities far more than it should be.
You see this anytime e.g. a high profile athlete "weighs in" on complicated geopolitical matters, when in reality their opinion on that matter should count next to nothing in most cases, unless in addition to being a great athlete they have also established a track record (reputation) of being expert or insightful in international affairs.
A free-for-all community like Twitter could continue to exist, where there are basically no waiting periods before posting and your reputation from other areas counts a lot. But then other communities could set their own standards that say you can't post for N days and that your incoming reputation factor is 0.001 or something like that.
So the person could stay in character, but they couldn't post for a while, and even when they did, their posts would initially have very low visibility because their reputation in this new community would be abysmally low. Only by really engaging in the community over time would their reputation rise to the point of their posts having much visibility. And even if they played the long game, faking good behavior for a long time and then deciding to go rogue, their reputation would drop quickly, so the damage they could do would be pretty limited in that one community - while also potentially harming their overall reputation in other communities.
As noted in the other post, there is lots of vagueness here because it's just thinking out loud, but I believe the concepts are worth exploring.
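To at least pin down the mechanics a little, here's roughly what I have in mind (the waiting periods and the 0.001 factor are just placeholder numbers):

    # Hand-wavy sketch of per-community reputation with a configurable
    # waiting period and an incoming-reputation discount. All numbers
    # are placeholders, not proposals.
    from dataclasses import dataclass

    @dataclass
    class Community:
        name: str
        waiting_days: int = 0         # days before a new member may post
        transfer_factor: float = 1.0  # how much outside reputation carries in

    @dataclass
    class Membership:
        community: Community
        days_since_join: int = 0
        local_reputation: float = 0.0

        def can_post(self):
            return self.days_since_join >= self.community.waiting_days

    def join(community, outside_reputation):
        m = Membership(community)
        m.local_reputation = outside_reputation * community.transfer_factor
        return m

    free_for_all = Community("free-for-all", waiting_days=0, transfer_factor=1.0)
    strict_forum = Community("strict", waiting_days=30, transfer_factor=0.001)

    celeb_outside_rep = 10_000.0
    a = join(free_for_all, celeb_outside_rep)
    b = join(strict_forum, celeb_outside_rep)
    print(a.can_post(), a.local_reputation)  # True 10000.0
    print(b.can_post(), b.local_reputation)  # False 10.0

Each community picks its own waiting period and transfer factor, so a Twitter-like free-for-all and a strict niche forum can coexist under the same mechanism.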
> You see this anytime e.g. a high profile athlete "weighs in" on complicated geopolitical matters, when in reality their opinion on that matter should count next to nothing in most cases, unless in addition to being a great athlete they have also established a track record (reputation) of being expert or insightful in international affairs.
I apologize for multiple replies; I'm not stalking you. It's just an area I'm interested in and you're hitting on many ideas I've kicked around over the years.
I once got paid to write a white paper on a domain-based reputational system (long story!), based on just this comment. I think it requires either a formal taxonomy, where your hypothetical athlete might have a high reputation for sports and a low one for international affairs, or a post-hoc cluster-based system that identifies the semantic distance from one's areas of high reputation.
And reputation itself can be multi-dimensional. Behavior, like we've talked about elsewhere, is an important one. But there's also knowledge. Can the system model the difference between a knowledgeable jerk (reputation 5/10) and a hapless but polite and constructive novice (reputation 5/10)?
So if your athlete posts about workouts, they may have a high knowledge reputation. And if they post about the design of stadiums, it's relatively closer to their area of high knowledge reputation than international affairs would be. And so on. And independently of their domain knowledge, they have a behavior reputation that follows them to new domains.
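A toy version of that, where the distance table is invented by hand but would really come from a taxonomy or from clustering (all scores made up):

    # Toy illustration of domain-distance-weighted knowledge reputation,
    # plus a separate behavior score that travels everywhere. The
    # "semantic distance" table is hand-written for the example; a real
    # system would derive it from a taxonomy or embedding clusters.

    SEMANTIC_DISTANCE = {
        ("workouts", "workouts"): 0.0,
        ("workouts", "stadium design"): 0.4,
        ("workouts", "international affairs"): 0.9,
    }

    class Profile:
        def __init__(self, behavior_rep, knowledge_rep):
            self.behavior_rep = behavior_rep    # follows them everywhere
            self.knowledge_rep = knowledge_rep  # {domain: score}

        def effective_knowledge(self, topic):
            # Credit decays with distance from the domains where the
            # person has an actual track record.
            best = 0.0
            for domain, score in self.knowledge_rep.items():
                dist = SEMANTIC_DISTANCE.get((domain, topic), 1.0)
                best = max(best, score * (1.0 - dist))
            return best

    athlete = Profile(behavior_rep=7.0, knowledge_rep={"workouts": 9.0})
    print(athlete.effective_knowledge("workouts"))               # 9.0
    print(athlete.effective_knowledge("stadium design"))         # 5.4
    print(athlete.effective_knowledge("international affairs"))  # ~0.9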
These are such great questions, and worth exploring IMO. My thinking is biased towards online communities (like old Usenet groups or mailing lists) and not towards giant free-for-alls like Twitter, so I think I have a lot of blind spots, but this is something I've thought about a lot too, so it's great to hear peoples' ideas, thank you.
> I think it requires either a formal taxonomy [...] or a post-hoc cluster-based system that identifies the semantic distance
Yeah, I wonder if some existing subject-x-is-related-to-subject-y mapping could be used, at least as a hint to the system, e.g. all the cross-reference info from an encyclopedia. When communities become large enough, you might also be able to tease out a little bit of additional info from how many people in group X also participate in group Y.
As an experiment, I'd also be curious to see how annoying it'd be to have your reputation not transfer across communities at all. Instead you'd build reputation via whatever that community defines as good behavior, and by having existing community members vouch for you (and if you turn out to be a bad apple relatively soon after joining, their endorsement ends up weakening their reputation a little too). There are some aspects to how it works in real life that are worth bringing over into this, I think.
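Something like this is what I mean by vouching, where the voucher stakes a bit of their own standing (the penalty and share are arbitrary numbers):

    # Sketch of vouching where the voucher absorbs part of the penalty
    # if the newcomer turns out to be a bad apple. Constants are arbitrary.

    class Account:
        def __init__(self, name, reputation=0.0):
            self.name = name
            self.reputation = reputation
            self.vouched_for = []

        def vouch(self, newcomer, boost=1.0):
            newcomer.reputation += boost
            self.vouched_for.append(newcomer)

    def penalize_bad_apple(bad_apple, members, penalty=2.0, voucher_share=0.25):
        # The bad apple takes the hit, and everyone who vouched for them
        # absorbs a fraction of it.
        bad_apple.reputation -= penalty
        for member in members:
            if bad_apple in member.vouched_for:
                member.reputation -= penalty * voucher_share

    veteran = Account("veteran", reputation=10.0)
    newbie = Account("newbie")
    all_members = [veteran, newbie]

    veteran.vouch(newbie)
    penalize_bad_apple(newbie, all_members)
    print(newbie.reputation, veteran.reputation)  # -1.0 9.5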
> system model the difference between a knowledgeable jerk (reputation 5/10) and a hapless but polite and constructive novice (reputation 5/10)?
I touched on this in a sibling comment somewhere (though I've long lost track of the threads), but I think the platform would want to rely on human input to some degree - part of being a good community member is doing things like periodically reviewing a batch of posts that got flagged by other users. One community might have an 'anything goes' mentality, while another may set stricter standards around what's considered normal discussion, so I think it'd be hard for a machine to differentiate but relatively easy for an established community member (again though, it always has at least a micro impact on your reputation, so how you carry out your reviewing duties can also increase or decrease your reputation in that community).
Odds are too that, if you occasionally have to put on the hat of evaluating others' behavior, it might help you pause next time you're tempted to fly off the handle and post a rant.
Anyway, the focus of the technology would be less about automatically policing behavior and more about making it easier for communities to call out good and bad behavior without much effort, and then having that adjust a person's reputation - often very tiny adjustments that accumulate over time to establish a good reputation.
> independently of their domain knowledge, they have a behavior reputation that follows them to new domains
These are good ideas that might help manage an online community! On the other hand, they would be bad for business! When a high-profile athlete weighs in on a complicated geopolitical matter and then (say) gets the continent wrong, that will generate tons of engagement (money) for the platform. Plus there's no harm done. A platform probably wants that kind of content.
And the whole reason the platform wants the athlete to post in the first place is because the platform wants that person's real-world reputation to transfer over. I believe it is a property of people that they are prone to more heavily weigh an opinion from a well-known/well-liked/rich person, even if there is no real reason for that person to have a smart opinion on a given topic. This likely is not something that can be "fixed" by online community governance.
I agree this solution wouldn't scale to all platforms; those driven by maximizing views and engagement would find it counter-productive, as you say.
But that's fine. There are enough of those platforms already, and they have all the flaws we're talking about. We don't need to fix the existing ones so much as get to a world where better (by these standards) platforms exist and can compete on quality.
Twitter/Reddit/etc kind of try to do that, but it's hard to get right and it's always an afterthought, like Yishan mentioned.
But maybe there's room in the market for something with higher signal to noise, that's reputation based. And reputation doesn't have to be super cerebral and dry and stodgy; reputation can be about sense of humor or whatever is appropriate for the sub-communities that evolve on the platform.
> This is a good idea, except that it assumes _reputation_ has some directional value upon which everyone agrees.
Reputation is inherently subjective. I think any technological solution that treats reputation as an objective value that's the same for everyone won't work if applied to any kind of diverse community, but I don't see any problem with a technical solution that computes a high reputation score for that TV star among his fans, and a low score among people who aren't fans.
(It's also sometimes worthwhile to consider negative reputation, which behaves differently than positive reputation in some ways. Not all communities should have reputation systems that have a concept of negative reputation, but in some kinds of communities it might be necessary.)
The problem is that "jerk" is relative and very sensitive people will tilt the scale. Also, occasionally jerks will have interesting insights that you would miss if you blocked the jerks. It's a problem of whether your platform cares more about having diverse viewpoints or about people being polite to each other.
> Twitter Verification is only verifying that the account is "authentic, notable, and active".
Musk has been very clear that it will be open to anyone who pays the (increased) cost for Twitter’s Blue (which also will get other new features), and thus no longer be tied to “notable” or “active”.
> At least I have not heard about any changes other than the price change from free to $8.
It's not a price change from free to $8 for Twitter Verification. It is a discontinuation of Twitter Verification as a separate thing, moving the (revised) process and resulting checkmark to be an open-to-anyone-who-pays component of Blue, which increases in cost to $8/mo (currently $4.99/mo).
Public figures still have separate labels. I'm sure you'll have to have a credit card that matches your name on file (even if you choose to publicly remain anonymous). This is a much faster way to verify someone's identity than having people submit a photo ID and articles about them or written by them, which was the previous requirement.
I do wish that public figures would have a different badge color or a fully filled in badge with other verified users having only a bordered badge etc. Frankly we don't really know what it will look like.
But then how would you address "reputable" people spreading idiotic things or fake news? How would you prevent Joe Rogan spreading COVID conspiracy theories? Or Kanye's antisemitic comments? Or a celebrity hyping up some NFT for a quick cash grab? Or Elon Musk falling for some fake news and spreading it?
Why does anyone think this is a solvable problem? Once there is sufficient notability, only authoritarian censorship or jailing is going to work, with increasing degrees of force up to executions (and even then that won't silence supporters if there was any sort of group association). For people of lesser reputation, "canceling" might work, in the sense of public shaming or loss of income, but there is a ladder of force that is required.
If we want any sort of free society, I don't think we _can_ stop fake news. We can only attempt to counter it with reason, debunking, and competitive rhetoric. Maybe we can build tools that attempt to amplify debunking, and build institutions trusted as neutral arbiters or that have other forms of credibility.
This is lumping together multiple problems, but IMO a platform that tries to police wrongthink from the top down is guaranteed to fail.
For my part, I don't think anyone anywhere should be prevented from saying the dumbest things that pop into their heads; what I disagree with is giving everyone a global megaphone and then artificially removing the consequences for saying the dumbest things that pop into their heads. :)
If you have a reputation that is tied to the community in which you are participating, and your reputation affects the reach/visibility of your messages in that community, then as you behave at odds with the standards of that community, your reputation goes down, thus limiting your reach. Exactly how to implement that well is the billion dollar question, but at the heart of it all is a simple feedback loop.
> your reputation goes down, thus limiting your reach
That's a nice concept, but it is unclear how to implement this. If you have a low reputation, you can only reach X users, but if your reputation improves, you can reach 2X users? How do you set these thresholds? How do you pick the X unlucky users to receive the low-reputation tweets?
Almost all users likely just need automated moderation through reputation (like my HN karma, or StackOverflow reputation, where a certain score is required for certain actions) for most of their tweets.
To combat viral fake news, the top 0.001% of users could have manual reviews, as should viral tweets that are about to explode into reaching hundreds of millions of users. Having a manual moderation queue for any tweet that has grown from 0 to 100M retweets in a certain time frame, as well as for everything tweeted by users with X million followers, would be quite a low cost.
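As a strawman, the triggers for that queue could be dead simple (none of these numbers mean anything, they just show how cheap the check is):

    # Strawman thresholds for when a tweet or account hits a manual
    # review queue; the numbers are placeholders, not proposals.

    def needs_manual_review(author_followers, retweets, retweet_window_hours):
        # Very large accounts always get reviewed.
        if author_followers >= 5_000_000:
            return True
        # Anything going viral unusually fast gets queued.
        if retweets >= 100_000_000 * (retweet_window_hours / 24):
            return True
        return False

    print(needs_manual_review(20, 5, 24))            # False: small account
    print(needs_manual_review(8_000_000, 10, 1))     # True: huge account
    print(needs_manual_review(500, 90_000_000, 12))  # True: exploding virally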
We have already seen some of this, where Twitter has manually labeled information as being false, e.g. from Donald Trump. I doubt they'd give that treatment to any of my claims to 20 followers, nor would they have to, because what I write simply doesn't have the same importance when I have 20 followers.
In the end, you do need the ivory tower with the ministry of truth saying exactly what is and isn't acceptable. Should Rogan's Covid conspiracy tweet be blocked? Given a warning label? Or allowed to spread? That would have to depend on the level of harm it can cause. There can't be a "marketplace of ideas" where everyone is forced to make their own judgement and where all content is acceptable - not just because US conservatives will find it hard to tweet, or because advertisers won't advertise, but because it'll have a terrible signal-to-noise ratio for anyone, causing it to be pretty deserted.
The Internet needs a verified public identity / reputation system, especially with deep fakes becoming more pervasive and easier to create. Trolls can troll all they want, but if they want to be serious with their words, then they should back it up with their verified public Internet/reputation ID.
If this is one of Musk's goals with Twitter, he didn't overpay. The Internet definitely needs such a system; it has for a while now!
He might connect Twitter into the crypto ecosystem, and that, along with a verified public Internet/reputation ID system, could I think be powerful.
It's worth noting that Twitter gets a lot of flak for permanently banning people, but those people were all there under their real names. Regardless of your opinion on the bans, verifying that they were indeed banning e.g. Steve Bannon would not have made the decision-making process around his ban any easier.
They shouldn't ban anyone, as politics is filled with tons of untruths and lies on both sides. You have a point, though: even a Steve Bannon or an AOC will make things up and lie. Maybe the reputation system avoids political speech, or sets a huge bar for it, because politics is always a shit-show. Overall, what you bring up here is a huge problem that maybe can be solved somehow. For deepfakes, I still feel strongly a system needs to be in place - the one I mentioned or something close to it.
How does this system work worldwide across multiple governments, is resistant to identity theft, and prevents things like dictatorships from knowing exactly who you are?