I'm ok with sites working on de-radicalization by making certain content more difficult to find, but I think we need transparency in all of this. I'm wary of a giant corporation like Google deciding what is "correct" discourse.
The analog I like to bring up is the College Board devising an "adversity" score along with your SAT score. I prefer the College Board come up with a transparent methodology instead of each institution, like Harvard, having an opaque one of their own. In the future I want to see a world where your zip code doesn't determine your outcome, but some changes like this happen at a glacial pace. In a similar light I also want to see discourse become more civil over time with the rebuilding of the middle class.
While I agree, I think transparency is a hard problem. Just look at the cat-and-mouse game Google has been playing with its search algorithms and SEO. And that has maybe 20% of the implications for freedom of speech, politics, conspiracy, and controversy.
Credit scores have a similar rock, paper, scissors game between objectivity, transparency, and oversight.
I think some amount of opacity in the algorithm is going to be required but maybe a published report of actions taken at a 6-12 month lag? That might give opportunity for critical press and course correction in a broader sense without nitpicking every call.
I find it hard to hold Google and Facebook so accountable for addressing these problems personally when we struggle to hold our elected officials accountable for addressing them.
This comment breaks the site guidelines, which ask: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith." Please don't turn the dial all the way to ideological flamewar 11, especially when a thread hasn't even started yet. The only outcome to that is yet another tedious feud. Those can readily be had elsewhere, and we're trying to have a site for something else here.
> we're trying to have a site for something else here.
I think, given the state of things and the current culture war, this topic needs to be thrown down the hole for the sake of the board. Hence my flag.
The very first posts on this thread and first replies are like battle lines drawn.
At the same time, a board for discussing tech has some responsibility to ensure that big issues facing tech can be discussed. It's a problem if whoever's making the next Youtube lurks here and thinks "huh, content moderation must not be a big issue, nobody in the industry seems to talk about it".
If we can't discuss it civilly, should it be discussed at all?
I think if we want this kind of discussion on the board, the topic should be clearly marked as potentially hostile and either have comment voting disabled entirely or change the policy so that rule violations in flagged topics come with strict and lengthy timeout periods.
And I say that as someone who has caught dang's ire a few times, as I'm sure he can confirm. :)
> If we can't discuss it civilly, should it be discussed at all?
I think we should.
And as you propose, sharper reactions towards those who still cannot behave.
That and a threshold (example at least 3 months old, at least 15 comments and the account must have good standing) for accounts that wish to participate and the moderators might have a chance.
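The threshold proposed above could be sketched as a simple eligibility check. This is only an illustrative sketch: the field names (`created`, `comment_count`, `in_good_standing`) and the exact definition of "good standing" are assumptions, not any real forum's API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Thresholds taken from the proposal above; tune as needed.
MIN_ACCOUNT_AGE = timedelta(days=90)  # "at least 3 months old"
MIN_COMMENTS = 15                     # "at least 15 comments"

@dataclass
class Account:
    created: datetime
    comment_count: int
    in_good_standing: bool  # e.g. no recent moderation penalties (assumption)

def may_join_flagged_thread(account: Account, now: datetime) -> bool:
    """Return True if the account clears the participation threshold
    for commenting in a flagged/hostile-marked topic."""
    return (
        now - account.created >= MIN_ACCOUNT_AGE
        and account.comment_count >= MIN_COMMENTS
        and account.in_good_standing
    )
```

A gate like this is cheap to enforce at posting time and gives moderators a smaller, more invested pool of participants to watch.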
"gaywonk" isn't why Crowder is homophobic, and at worst it's only a minor example of Crowder's promotion of hate speech.
Here are several clips of Crowder clearly acting homophobic, acting in general like an asshole, and probably violating YouTube's TOS that forbids posting content that "is deliberately posted in order to humiliate someone".
Following that, there are clips of Crowder using common homophobic slurs that had the direct result of inciting some of his fans to dox Maza and denial-of-service/spam his phone number (video t=115). This kind of incitement and promotion of a culture of hate is exactly what hate-speech laws are trying to reduce.
Even though this has already been flagged, I'm going to make a second attempt at a "substantive" comment:
The high road would be either A) free-speech-absolutism of publish anything that can legally be published, and take the heat for it, or B) run a family-channel-style platform of heavily curated content, and take full responsibility for all of it.
Since this is Google, and they can never have enough money, they take the weasel route of C) rake in that sweet sweet momo-chan and playdoh-princess cash until the media catches-on, then pivot to the next batch of lucrative slime.
They're trying to have their cake and eat it too. Most companies would probably do the same given the same circumstances. It's disappointing but predictable.
I'm not some crazy tinfoil-hat-wearing conspiracy theorist, but it's certainly telling that Wojcicki spoke at Recode (which is owned by Vox) but didn't give Crowder the same courtesy. This just screams conflict of interest (and will only give Crowder more ammunition), but whatever.
Second, it's clear that YouTube, much like Google, et al. are still attempting to keep their classification as a murky "grey area" where they aren't really platforms, but they aren't really publishers, so they can make tons of money, but not seriously answer to the public either. Classic corporatism. And until our representatives in DC really wake up, I can't really fault them for it. Our lawmakers barely understand how adding a friend on Facebook works, and I'm sure Silicon Valley enjoys printing money on the back of this ignorance.
Third, it's incredibly clear that there's an ideologically-liberal slant in tech. I've met both Muslims and Christians who were simply too afraid of the status quo to speak out, even though some of these policies made them uncomfortable. I really wish we could be more honest about this. As a staunch Millian[1], I contend that diversity of thought is a good thing.
Finally, (imo) anyone who seriously thinks Crowder should've been censored/deplatformed is just being unfair. He's pretty offensive, but doesn't really go much further than Colbert, John Oliver, etc.
> Second, it's clear that YouTube, much like Google, et al. are still attempting to keep their classification as a murky "grey area" where they aren't really platforms, but they aren't really publishers, so they can make tons of money, but not seriously answer to the public either
This distinction you and others are trying to draw between "publishers" and "platforms" is a red herring.
The Communications Decency Act is very explicit about why this is a red herring:
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
It does not matter how much curation, promotion, or removal of content Google, Facebook et al. engage in. They will never be held liable for the content their users upload, beyond existing criminal and civil liabilities such as those imposed by the DMCA. By virtue of being a provider of an interactive computer service, they will never be treated as publishers.
> He's pretty offensive, but doesn't really go much further than Colbert, John Oliver, etc.
I'm sorry, what?
> Last week, Maza called out the abuse in a supercut of comments Crowder has made on his popular YouTube channel in the last two years. They include: “lispy sprite,” “little queer,” “Mr. Gay Vox,” “Mr. Lispy queer from Vox,” “an angry little queer,” “gay Mexican,” and “gay Latino from Vox.”
Carlos Maza’s twitter and ig handle is “gaywonk”. He also speaks with a lisp. He also makes it a point to always point out that he’s “queer” and “latino”.
Calling him those things is not an insult. This is what his whole identity and claim to fame is.
> It's been a while since I watched John Oliver or Colbert, I guess I missed them using slurs on air?
Colbert literally called Donald Trump Putin's "c--k holster" on live TV (but was obviously bleeped). Samantha Bee called Ivanka Trump a "feckless c--t." Both of these are easily Google-able.
Again, I'm not a fan of this kind of language on either side, but let's at least be fair.
So, two late night hosts (one of which isn't John Oliver) have used obscenities directed at a person once each. And that's comparable to Crowder's output? He's very obviously been targeting that guy repeatedly, for a long time.
I would like to see evidence of that. Crowder's output towards that guy doesn't even seem comparable relative to the amount of other stuff he produces.
I gave you two examples (the ones most cited by conservative media), and I'm sure you can dig more up. But if that's your line of reasoning, you're headed towards the sorites paradox, and that's not a good place to be.
Do you really think you're being fair when you're framing one as a "one-off insult" and the other as "targeted harassment"? Not to mention that I watch John Oliver every week, and just about every episode (without fail) he makes fun of Trump or someone in his cohort. Is that also "repeated, targeted harassment"?
I don't think John Oliver ever incited his fans to dox and harass/denial-of-service anyone. He's gone after a few organizations and government agencies like the FCC that had already asked for comments from the public.
Perhaps more important is that John Oliver is inciting people into political action; political speech by the press, even when the press tells jokes, is protected very strongly in the US because it's named in the 1st Amendment. Crowder doesn't seem to be making a political statement and generally seems to have acted like an asshole that says homophobic slurs.
And now we're in full-blown sorites paradox mode, where the goalpost keeps changing. At first it was "repeated harassment" and now it's "incite to dox and harass." It's now okay for John Oliver to go after organizations -- even though he very clearly goes after the President (not to mention Gorsuch and Kavanaugh) on a fairly regular basis. As I hope you can see by this point, due to the originally-used ambiguous predicates, this is a fruitless line of argumentation.
By the way, even some cursory research would show that Crowder is extremely clear about his views on not harassing or doxxing, but it's clear that you haven't even done me (your opponent in this debate) the courtesy of some basic knowledge-gathering.
I do agree that Crowder is an asshole. I also think that assholes have a right to be assholes as long as their behavior doesn't violate Mill's Harm Principle.
> At first it was "repeated harassment" and now it's "incite to dox and harass."
It's always been an accusation about a pattern of behavior, which includes multiple ("repeated") events that happened over the past (if I recall correctly) ~2 years. Some of these events involved Crowder using hate speech, which is not "calling people nasty names". Hate speech restrictions and laws are generally defined to target speech/writing/conduct/etc that incites violence or prejudicial behavior against a protected class. Calling someone a well-known slur is (probably) just being an asshole. When people talk about Crowder's hate speech, they aren't talking about him using a specific word; they're talking about how his rhetoric seems to be having the effect of inciting parts of his audience to attack Maza through his phone.
> It's now okay for John Oliver to go after organizations
Yes, of course it is. Attacking organizations (public or private) on matters of public interest is one of the primary goals of protecting the freedoms of speech and the press.
> he very clearly goes after the President
What specifically are you accusing John Oliver of doing with regards to the president? I doubt he said anything that would even theoretically qualify as hate speech, but if I'm mistaken, I would be interested in a link to any example of John Oliver inciting his audience to harass the President because of the President's race, sex, religion, or other protected class.
> ambiguous predicates
The only ambiguity I see is a lot of people misusing the term "hate speech".
> Crowder is extremely clear about his views on not harassing or doxxing,
I know; I do actually do my research about a topic before getting involved in an argument. Saying that harassment and doxxing are bad is step 1. If he actually wanted it to stop, it would be a good idea for him to publicly condemn those in his audience who were responsible for the harassment/etc, and work to avoid similarly inciting his audience in the future.
However, saying a few words against harassment and then continuing the behavior that he now knows could cause additional problems in his audience is at best irresponsible and reckless, and at worst makes his statements about harassment look suspiciously like a lie or a dog whistle.
> but it's clear that you haven't even done me (your opponent in this debate) the courtesy of some basic knowledge-gathering.
Warning: this kind of personal attack is against HN rules.
> I also think that assholes have a right to be asshole
I agree. Which is why the accusations against Crowder are about behavior that is significantly worse than simply being an asshole. Harassment and hate speech eventually lead to violence when ignored.
Canada has robust, tried and tested hate speech laws. They've also previously said they would start regulating platforms unless they started to "self regulate." It's time.
Back in 2007 Christopher Hitchens argued that Canada's hate speech laws are not only immoral but self-contradictory. The motion: "Be it resolved: Freedom of speech includes the freedom to hate." Hitchens was fully in favor of that statement.[1]
His argument has three parts.
First, freedom of speech isn't about the rights of the speaker. It's about the rights of people who want to listen to that person's ideas. Every time you silence someone, you are infringing upon the rights of that audience. The audience might not agree. Maybe they want to understand the speaker's ideas so they can refute them. But if you prevent the audience from hearing those views, they can't refute them, and the bad views continue to spread.
Second, censoring requires delegating a censor. But if you follow the first argument, you'll see that a censor decides for you what you are allowed to read or hear. I'm an adult and can decide that for myself, thank you very much.
Third, if you are going to censor hate speech, then you must censor every major religious text. The bible and the quran contain passages encouraging hatred of homosexuals, subjugation of women, and taking of slaves. By our standards, they are utter abominations and any impartial application of a hate speech law would see these books banned.
The oration by Hitchens is his best. He starts off by yelling "Fire!" to the audience, and explains just why the analogy of yelling fire in a crowded theater is a terrible one. (Mainly that it was coined in a SCOTUS ruling that jailed socialists in WWI for distributing pamphlets against US involvement in the war.)
Freedom of speech has always been qualified, and this argument does not change it. At best it succeeds in arguing for free speech restricted to certain platforms, and the hate speech laws still allow for free speech to private audiences.
I'll just refute the points you have summarized here since I have no interest in listening to the entire speech to just extract the argumentative points he makes.
The first argument doesn't make much sense. If I have terrible views but do not want to communicate them it is my right to be silent, but if this argument passes I am infringing on the rights of my potential listeners. If I choose to discuss my views to an exclusive membership, I am infringing on their rights as well.
Similarly, IP and copyright laws infringe on this right since they limit the audience of a work.
Second, just as most people can agree that certain substances are harmful and should be made difficult to distribute, most people can likewise agree that some thoughts are harmful and should be made difficult to communicate. The question isn't a matter of whether to censor or not, but the degree and topics.
The final argument, similarly, does not pass for general spaces. Christians and Muslims do not think that the more icky parts of their texts should be read by children without appropriate direction. Neither would want a terrible passage displayed out in the open for their children to see. Some may even have an interest in restricting it, so as to prevent a critic appropriating their text to portray them in a bad light. Academics do not think that immoral texts should be read by children without proper direction.
> Freedom of speech has always been qualified, and this argument does not change it. At best it succeeds in arguing for free speech restricted to certain platforms, and the hate speech laws still allow for free speech to private audiences.
I'm talking against Canadian hate speech laws, not YouTube's policies. It would be nice if private companies erred on the side of free expression, but having the government force them to do so is a cure that is worse than the disease.
> The first argument doesn't make much sense. If I have terrible views but do not want to communicate them it is my right to be silent, but if this argument passes I am infringing on the rights of my potential listeners. If I choose to discuss my views to an exclusive membership, I am infringing on their rights as well.
You misunderstand me. I am saying there are people who want to read or listen to something, and censorship prevents them from doing so. Nobody is saying that you have a right to force people to listen to your message or read it. That wouldn't generalize.
> Similarly, IP and copyright laws infringe on this right since they limit the audience of a work.
Copyright laws are about implementations, not concepts. The argument for free speech is about spreading ideas, not web rips of the latest Netflix special. Copyright laws have fair use exemptions to make sure they aren't used to inhibit people from spreading ideas. That's why I can use a clip of Pat Robertson in a video criticizing his policies.
> Second, just as most people we can agree that certain substances are harmful and should be made difficult to distribute, similarly most people can agree that some thoughts are harmful and should be made difficult to communicate. The question isn't a matter of whether to censor or not, but the degree and topics.
Considering the cost of prohibition and the war on drugs, this argument seems shaky for you. The cure is worse than the disease! People don't agree on this at all. If anything, it's the "legalize everything" tribe that has been gaining traction. So too it is for free speech. In fact in the US, censorship is forbidden for pretty much everything except leaking classified materials. The concept is called prior restraint.[1]
> Christians and Muslims do not think that the more icky parts of their texts should be read by children without appropriate direction.
This is not true. I personally know at least a dozen children who have the bible read to them uncensored by their parents. (In one case the father is a pastor.) They all believe that homosexuality is a sin and that women should be subservient to their husbands. They got these ideas from the bible, not from some modern day charlatan.
It's worse for the quran, which encourages people to become Hafiz (one who has memorized the quran).[2] Parents and schools encourage children to totally memorize the quran as it's easier to do when you're younger. Over 100,000 children a year memorize it. This includes the verses about valuing the word of a woman at half that of a man, verses ordering the smiting of the necks of disbelievers (if you wonder why jihadists behead so often, it's because of that), and verses encouraging marital rape.
If one really believes these books are dictated by the creator of the cosmos, then every word of them is sacred and should be spread as far and as wide as possible. That is what these people do. That is where people like Abu Bakr al-Baghdadi and Ayman al-Zawahiri get their philosophies.
> This includes the verses about valuing the word of a woman at half that of a man, verses ordering the smiting of the necks of disbelievers (if you wonder why so jihadists behead so often, it's because of that), and verses encouraging marital rape.
If you take them out of context, you can understand any text the way you want. Fortunately, these "controversial" verses weren't seen as such in Islamic history overall, and groups that deviated (the Kharijites for example) were intellectually obliterated, and were known not to be able to defend their incorrect views.
This is not limited to religious texts. Any person with an agenda will find whatever means to achieve said agenda. Twisting words not to mean what they do, making up lies, and so on. We see it all the time today, and throughout history.
Yes, I'm discussing the hate speech laws. You can still incite people to hatred in private conversations. Thinking about your reply leads me to agree that the hate speech laws indeed go too far, and I'll talk about that with regard to the third point. But I still stand by the claim that free speech should be restricted to certain platforms.
> I am saying there are people who want to read or listen to something, and censorship prevents them from doing so.
You misunderstand me. I'm saying that this notion that we have a right to read or listen is not unqualified. When you think of traditional government censorship, it seems obvious, but it does not stand the test when we consider other gatekeepers of information. We do not have a right to read or listen to news articles or scholarly papers if publishers so choose not to distribute them to us.
> The argument for free speech is about spreading ideas, not web rips of the latest Netflix special.
The hate speech laws do not prevent spreading ideas to a similar extent that copyright laws do not prevent spreading ideas. More narrowly, if your hate speech passes the tests for fair use it is likely that they will pass the test for not being hate speech, since it would be clear that those ideas are being framed in a "research, private study, education, parody, satire, criticism or review and news reporting" context.
> Considering the cost of prohibition and the war on drugs, this argument seems shaky for you.
Not really. We all agree that toxic and hazardous materials must have very regulated distribution, in the sense that where they are sold there must be huge warnings that they contain hazards. There are materials that would be controversial to ban (like drugs), and there are ideas that would be controversial to restrict distribution, but there are also ideas that would not be controversial to restrict distribution (like advocating children to bully each other, distinct from considering philosophically the morality of such a policy).
> This is not true.
I will concede this point. Moderate religionists think this way, but the more extreme religionists do not. Your examples don't really work since those are still private conversations and private distribution, and so don't run afoul of the law.
But the point still stands that there is an interest in putting hate-inciting content in a public space. The main problem with hate-inciting content being in a public space is that it harasses a person who is not prepared for it, again, the "children's eyes" example stands that free speech should be restricted to certain platforms. To address that, I think it suffices that hate-inciting content must be put behind a confirmation that the content incites hatred and is not suitable for minors: it is still accessible to everyone, and the concern for harassment is gone.
As examples, fiction containing bigoted characters speaking hate need not carry such warnings, since they do not exhort the reader or claim any truth, but insofar as religious texts exhort the reader, and discuss mistruths that hurt any group, they should contain such warnings.
I see problems with #1. First, I don't believe that having hate speech in the open is better because it can then be refuted. I think that's just BS; bigots aren't fueled by rational discussion, they're fueled by hate and ignorance.
Second, part of the problem here is that these platforms are a) open and b) can push this content, including to minors. I'd probably be willing to have Youtube and twitter have _no_ censorship, but in exchange the platforms should be dumb directories and/or not publicly accessible.
Third, hate speech can lead to violence. I feel that this point needs to be balanced against someone's right to listen (is that really a right?).
I agree that in this case, it's probably more harassment than full on hate speech. But once you start spewing hate at large groups of people, more harm is done than good:
"In order to maintain a tolerant society, the society must be intolerant of intolerance."
> Second, part of the problem here is that these platforms are a) open and b) can push this content, including to minors. I'd probably be willing to have Youtube and twitter have _no_ censorship but in exchange the platforms should be dumb directories and/or not publicly accessible.
I'm talking about the Canadian hate speech laws, not YouTube censorship. Companies can refuse to host whatever they want.
> Third, hate speech can lead to violence. I feel that this point needs to be balanced against someone's right to listen (is that really a right?).
OK, so do you ban the bible or not? Religious texts are the biggest source of hatred of homosexuals, racism, misogyny, and antisemitism. If we don't ban them, then clearly the hate speech laws will only be used to silence unpopular views instead of harmful ones.
I strongly encourage you to listen to Hitchens's arguments, because he addresses every point you've brought up except for corporate censorship.
I knew before I clicked on the link that it would be a contrapoints video. I’ve seen it before and let’s just say I don’t find it compelling. It’s your typical youtuber taking clips out of context, misinterpreting them (possibly deliberately) and then spending as long as they want attacking the straw man they just set up.
The straw man is evident from the title: “Does the left hate free speech?” That is not what Hitchens was arguing, nor what I am arguing.
But the censorship which you see now is not the censorship simply of a whimsical censor king. It is the censorship of actors responding to perceived public pressure. This is a decentralised force of censorship bubbling up to central authorities of the land, who feel they must respond because the public still has enough power.
Now in China one might argue differently, that a censor king could routinely ignore the views of the public. But what's happening in the west are democracies opting into censorship. A lot of people like censorship, they just don't like the word.
I'm talking about the Canadian hate speech laws, not censorship on Facebook, Twitter, YouTube, etc. I'd like it if companies would take down less stuff. Put up warnings and stop recommending certain content, sure. But don't make it inaccessible to people who want to see it. That said, they're private companies so they can be as censorious as they like. I definitely wouldn't want to get the government involved and force them to keep content up. That's a recipe for disaster.
I don't know either. You might be right. (Does Crowder cross that line between harassing an individual and targeting groups?) Doesn't change that Canada should be applying its laws to these platforms.
It really depends. If someone just walked up to the street here in Toronto where I live, and, for instance, with no provocation, called me a faggot for holding hands with my girlfriend, that, in my opinion, is hate speech, and that individual certainly could face consequences.
If you, say, make a joke about someone being an 'edgefag', 4-chan style, I think that's fine, in context, and something I did myself, years ago.
In Law, there's the 'Word' of the law and the 'Spirit', or Intention, of the Law, and ultimately, in a court of law, it's the intention, specifically with hate speech, that really determines if it's criminal in nature or not.
Full disclosure: I'm lesbian, polyamorous, and I am extremely happy that my partners and I can feel safe walking around downtown holding hands without people acting this way.
I have a 6 out of 10 on the Adverse Childhood Experiences test due to my mom and I still have a problem with the idea of putting her in jail over it. Especially for the verbal abuse part of it, which I'm sure you agree hurts worse than the rest of it.
Take it from me, things have the possibility of changing given time. Our parents are flawed people often not capable of doing any better than they did. No amount of punishment from the state can make up for that.
I have actually had a lot of financial and career damage due to these specific incidents, and in certain instances he even acknowledged on paper his behavior was negatively affecting my career and well being and chose to actively continue his hateful behavior.
My mother was physically, verbally, and emotionally abusive, but recently passed away.
While I believe people can change, and am having some level of difficulty with it, unfortunately he does need to be held responsible for the thousands of dollars of damage to my career, at the very least. My psychiatrist and I are working through it, but at this point unfortunately it looks like it will have to be the course of action. :(
Conservatives tend toward "aversion" as their coping mechanism (at its extreme, this might be labelled "hate") whereas liberals tend toward "connection" as their coping mechanism (at its extreme, this might be labelled "greed").
Interestingly in buddhism "greed, hate and delusion" are considered to be the 3 poisons. i.e. clinging to either greed or hate leads to delusion.
Isn't the fact that "hate speech" is a "thing" whereas "greed speech" is not evidence of a liberal bias?
Social platforms need to be held legally liable for the content they host. Unless you’re a “dumb pipe” doing one-to-one comms (email services could get an exemption for spam filters), you’re a publisher.
Hire moderators and the editors. If that doesn’t scale, then you need to cut back on the user-created content you host.
The content can be shared in other ways via the internet, but networks inside the network shouldn’t get a free ride anymore.
So how can any new social media platforms or startups ever be able to support all that additional staff/infrastructure?
I hold the complete opposite view. The individual that uploaded the content should be held solely responsible for the publishing of that content. Should the platform receive a request for removal, then by all means the onus would fall to them to make that decision. If the content is illegal, then the platform should aid law enforcement with whatever information they have on the uploader. I don't believe it makes any sense to pre-filter content unless you're deliberately trying to create an opinionated platform (which btw is fine if that's what you want to do, but it shouldn't be a legal requirement).
>So how can any new social media platforms or startups ever be able to support all that additional staff/infrastructure?
Why does it matter if they can? Do we allow unregulated oil drilling startups in the name of competition? There's no particular reason to assume that the platforms that govern our social interactions don't fall into a category of institutions who need to prove that they can keep a grip on their activity.
The quality and civility of public discourse is of principal importance to the health of liberal societies. Social media platforms are not irrelevant places of entertainment, it is becoming increasingly clear that they are the primary space where public consensus is formed. As such, they should be held accountable accordingly and not be treated like some random market provider of cat videos.
That is such an odd comparison to draw. Oil drilling is inherently destructive to the environment. Social media posts are not, although you do try to make that argument. The solution to oil drilling destroying the environment is less drilling. The solution to misinformation is _more_ information.
You seem to think that these platforms govern our interactions. When I am specifically saying that they should do as little governance as possible. They are merely the channels by which we communicate.
Edit: Actually I'm saying they should be allowed to do as little governance as possible. I think they should still have the option to filter however they like if that's their intention.
>The solution to misinformation is _more_ information.
Sorry, but how is this self-evident? Why is the solution to misinformation not indeed less information, from sources that hold authority and are accountable, well researched, slow enough to be absorbed and reasoned about, and so on?
When exactly was the human brain shaped for a miasma and a stream of low-quality information dripping down the Facebook news feed? There is no evidence that this is the case. In fact there is evidence to the contrary: the people most engaged in political news and information are the most easily misled and biased (https://www.vox.com/2015/6/8/8740897/informed-voters-may-not...)
>You seem to think that these platforms govern our interactions.
That's not something that I think. Whoever owns our public discourse and our attention by definition governs our interactions; there is no vacuum of governance that companies like YouTube can fall back on. It just means they're playing down their influence and responsibility.
Actually, I agree with you that as a species we are not cognitively equipped to handle an unrelenting torrent of information. I also agree that well researched and well presented (though not necessarily authoritative) arguments should be held to much higher credibility by the voting populace than random twitter posts. Those more credible arguments should be inherently more convincing for readers.
What I think actually needs to happen is to teach people the appropriate critical thinking and reasoning skills to make that distinction for themselves, rather than force the platforms to make that distinction on their users' behalf. Societal change is a very slow process that happens over generations. Every point in history is locally noisy but on a larger timescale that noise diminishes compared to the underlying trend. I don't think it's necessary to make changes based on the locally noisy timescale as I believe we will eventually reach equilibrium between the volume/quality of information in public discourse and the way the population digests it.
If they can’t exercise effective editorial control, they shouldn’t be in that business. Same as a newspaper that publishes harmful or libellous content. Will it kill off Facebook and Twitter in their current forms? Probably...but that’s a feature, not a bug.
I am saying they should be allowed to exercise _zero_ editorial control. Unless they specifically want to, like a newspaper. If you want to create liberal-Facebook or conservative-Facebook then go right ahead. At the moment Facebook is intended to be a neutral platform exercising as little editorial control as possible (though whether that is the reality is up for debate).
And btw, I couldn't care less for Facebook or Twitter, I don't use either personally.
this. I have a new social media startup project based on user-submitted content and I can tell you that there will be no way to compete if regulation is enacted that treats social media companies as publishers. It's hard enough just to build something that people may want to use in a market dominated by titans, but by adding the additional burden of moderation and liability for content, we'd be ensuring that the incumbents never have to worry about competition again.
I'm normally one to see both sides in a situation like this, but this one struck me differently.
YouTube appears scared of extremists. They claim to fear an anti-conservative bias. If policing blatant bigotry and hate makes you anti-conservative, there may be an issue with conservatism, not your policy.
If mainstream conservatives have gotten to the point where they'd defend blatant bigotry, disrespect, and just downright hateful behavior, and we yield to them, we need a reality check. This type of behavior does not make our society any better, promote a discussion, or spread any kind of well-being.
The same behavior would not be tolerated in many of the community spaces we share: schools, churches, theaters, restaurants. Why do we allow such things online? Why do we allow women to be harassed endlessly with death and rape threats on Twitter but not in person? I do not understand. To mistake poor cyber behavior as less important and less influential on our culture may be the gravest mistake we keep making.
There's a good faith, honest, and respectful way to have differing opinions and discuss important topics. But this isn't it, and it's not ambiguous. If we want a world where goodness prevails, we have to stop being scared and start making choices.
Unambiguous definitions do not exist outside of mathematics. Furthermore, modern fascist communication strategies rely on existing within this ambiguity.
If Google aka YouTube doesn't allow Conservatives to voice their opinions on the platform then there's the very real possibility that the president himself may get involved.