Likewise, I know people who've had a post or comment removed by Facebook because it was erroneously interpreted as racist or sexist. Off the top of my head: a friend who was sharing information on ICE raids with friends who were immigrants was banned from Facebook for a couple of hours (he was unbanned after appealing). I know someone else whose post sharing statistics on domestic violence was interpreted as discriminatory. The corporations making these policies don't have a good enough understanding of political issues to determine whether a statement is hate speech - and hell, even people who are knowledgeable about politics and history disagree in a lot of areas.
For instance, for years, people recommended a high carb/low fat diet. Now that's being challenged. (I'm not weighing in on the argument, just stating that it is one.)
How would this debate take place if this type of information were blocked? For years (decades?) Atkins was treated as a fringe-food nutjob. Now his early research is being promoted by many others, and the tide may yet turn in his direction. This is just one example.
How many other topics are out there that face the same issues of being labeled "fringe" before being commonly accepted?
 - https://journals.sagepub.com/doi/abs/10.1177/088307380934759...
But, who is we in this case? And what specifically are some things and how narrowly are they defined?
> The best we can do is go by consensus expert opinion.
Is that really the best we can do? How many times in history has consensus expert opinion been found later to be dangerous and wrong?
Should we assign that task to an AI or a customer service drone of unknown education and experience?
Or, should we assume that any reader of any particular "dangerous and wrong" things might be a better judge of whether those things are, in fact, "dangerous and wrong" as applied to their specific circumstance?
Non-experts deciding for themselves what is right/wrong is a recipe for disaster. People’s intuition is usually wrong without a lot of experience to back it up. It’s why we don’t let just anyone practice medicine or structural engineering. Expertise matters, and the opinions of experts matter much more than a nonexpert’s opinion.
> We is you, me, and society at large.
But that is not who would be passing judgment in this brave new world of customer service reps and AI. Do we just ban everything mildly controversial?
> Expertise matters and the opinions of experts matter much more than a nonexpert’s opinion.
Are you a qualified, cited expert in this area that you are holding forth on?
> If a very large majority of the people who study an area of science agree on a conclusion in that area then it’s more likely they are right than someone who has no expertise in that area.
There is some room for intelligent debate in nearly any 'consensus' opinion. Some percentage of even experts nearly always disagree with the consensus, and consensus has often been proven wrong. If society went along with the expert scientific opinion concerning eugenics, for example, many of us might not even be alive today.
> Non-experts deciding for themselves what is right/wrong is a recipe for disaster. People’s intuition is usually wrong without a lot of experience to back it up.
Non-experts deciding for themselves what is right/wrong is exactly how the world has existed for thousands of years. You seem to be saying that the answer is to just shut down this debate if it occurs among the great unwashed.
> It’s why we don’t let just anyone practice medicine or structural engineering. Expertise matters and the opinions of experts matter much more than a nonexpert’s opinion.
And yet, we do. In most free jurisdictions, you are free to practice medicine on yourself or to design your own structure or home.
I’ll restate my point in a different way. When government is deciding what types of scientific-information peddling ought to be banned or regulated, it’s best for our leaders to consult the experts in that area.
The important word here is “peddlers”. We regulate the sale of medical products. (And advertising related to such a sale.)
But we do not regulate who may join in the argument about (say) whether stress causes ulcers, or low-fat diets prevent heart attacks. The self-proclaimed experts have at various points in time been quite sure about these things. But thankfully their self-confidence did not result in a ban on people questioning the data.
"Suppression of ideas" is a grey area? I have no words.
Experts should be better at explaining the reasoning behind their opinions from first principles, and we should not trust them until they do so. Experts can make mistakes and have biases, often against new ideas.
Pre-internet, nutjobs existed in all communities. Cranks and whatnot. This is nothing new. What is new is the scale at which such people can propagate their nonsense. The cost of convincing others you are right has drastically declined. The speed at which such stupidity can spread has greatly increased.
We have entered an era in which regulation of stupid, crackpot ideas may need to happen. If and when we do decide to crack down on this, it’s best to rely on expert opinion. This is of course just an opinion of mine.
I submit to you that the vast majority of what you believe is due to knowledge you gained from others and not from first principles, as you put it.
Just today there was a thread on mental illness, and the crazy grab-bag of ideas which passes for expert consensus:
And if that's not crazy enough, look up what they believed 60 years ago. Should those have been locked in, by government force? Or should we be free to mock the shrinks for their delusions of understanding, if we wish?
I used to be a fundamentalist, right-wing Christian. Absolutely convinced that evolution was wrong. Eventually I was able to take the blinders off and ask myself, “Why is it that the overwhelming majority of people who study biology at the advanced level agree with evolution?”. It takes a great deal of arrogance to dismiss a conclusion that the overwhelming majority of the experts in a given area agree upon. Of course people get it wrong sometimes, but we have to navigate life with imperfect information/knowledge. Who else do we rely upon? Keep in mind I’m not saying believe whatever an expert says. I’m saying that if the overwhelming majority of experts in a given area agree on something, then that carries a tremendous amount of weight.
My wife is a psychiatrist. betulaq’s comment in the link you provided is one worth looking at.
Or worse, how do you know that the evolution side would win the battle to be selected as the official experts on this matter? We have these fights over school boards right now, and sometimes the biblical literalists have more votes. Who gets to decide how the head-count of experts is to be conducted? I think it pays to imagine these weapons being used by our enemies.
I don't know the solution to the anti-vax madness, but I think censorship is a much bigger battle.
If 99% of oncologists think you have cancer then I hope you get treatment for cancer. And if 99% of them think option A is your best hope then I suggest you take their advice. You don’t have to. They may be wrong but in this world of uncertainty and imperfect information it’s the best option.
I'm sorry for using your actual problem as a jumping-off point, but this is a real issue:
There are people called TERFs, or Trans-Exclusionary Radical Feminists, who think "being trans" doesn't exist, that trans people are all mentally ill and/or (in the case of transwomen) men trying to invade female spaces to cause havoc, and who are organized enough to turn being trans into a huge political clusterfuck.
More than it already is, I mean.
My point is, the project of cleaning up "Bad Medical Information" runs into politics even quicker than most would imagine, and TERFs are very, very good at playing the "You're misogynistic!" card early, often, and loudly. How much courage would a tech company have in the face of that these days?
(They're also good at playing the "TERF is a slur! Cis is a slur!" card.)
Really? What else should we call self-professed radical feminists whose definition of a woman excludes trans women?
Trans-exclusionary groups are pretty proud of the exclusion and of how it differentiates them from other feminist (or radfeminist, for that matter) groups. So TERF seems pretty apt to me.
Granted, I'm pretty naturally biased, given that they'd say I don't exist, but the point stands.
In addition you have no real recourse if the big machine doesn’t like what you posted.
A few years ago I reported (to Matt Cutts) people selling snake-oil pills to people with chronic organ failure by buying PPC ads on the medication keywords that people like me were using.
Snake oil salesmen are probably willing to pay good money for their ads so all incentives are for Google, Facebook and others to not stop these ads.
That's what you're saying to people with vaccine injuries.
On the one hand, it seems noble/responsible to suppress anti-vax, Russian influence, conspiracy theories, etc.
On the other hand, for centuries it's been recognized that the best antidote to "bad speech" isn't censorship, it's more speech. Don't ban, convince.
But on the other other hand, that's been exclusively argued in the domain of government action, that government censorship is ultimately worse than what it purports to cure.
In this case, tech firms/platforms like Facebook, YouTube and Pinterest aren't the government or society, they're private actors just like newspapers and members of the free press in general. Just like it could be irresponsible of the NYT to publish letters to the editor supporting anti-vax, you can argue it's equally irresponsible for tech firms to allow the same on their platforms.
Yet on the other other other hand, we're reaching a point where a great deal of discourse is concentrated on a few user-content-driven sites, so censorship on them feels like it's inching closer in spirit to government censorship.
But on the other x 4 hand, mainstream public conversation has always been driven mainly by merely a handful of newspapers and then news programs with their own editorial agendas, so a handful of tech actors exercising their own "responsible" (as self-interpreted) curation and promotion doesn't seem to be anything new.
In the end, free speech has never been an absolute right (e.g. yelling fire in a crowded theater, libel, etc.) and it's ultimately a question of finding the right balance between harms.
For centuries there were fairly large barriers to widely disseminating your speech, and to lots of people having time or ability to pay attention. I think this may have limited the amount of "bad speech" that the truth had to counter.
It's now a lot easier to disseminate speech, and people have more time and ability to consume speech.
I probably encounter more opinions in a day now on any given topic, from people claiming to have above average or expert knowledge on them, than I did in a month a mere 40 years ago.
This may make what worked centuries ago (or even a few decades ago) ineffective today.
E.g., detect conspiracy-theory speech and automatically provide links/warnings pointing to relevant curated sources?
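A minimal sketch of that suggestion, assuming a simple keyword-matching approach (the topic list and URLs here are illustrative placeholders, not any platform's actual system):

```python
# Hypothetical "annotate, don't remove" moderation sketch: match a post
# against a curated topic list and return links to vetted sources to show
# alongside it. A real system would use a trained classifier, not keywords.

CURATED_SOURCES = {
    "vaccine": "https://www.who.int/health-topics/vaccines-and-immunization",
    "flat earth": "https://en.wikipedia.org/wiki/Spherical_Earth",
}

def annotate(post: str) -> list[str]:
    """Return curated-source links for every flagged topic found in the post."""
    text = post.lower()
    return [url for topic, url in CURATED_SOURCES.items() if topic in text]
```

The point of the design is that the post itself stays up; the platform only attaches context, which sidesteps some of the censorship objections raised elsewhere in this thread.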
The problem with that here is that the medical industry is actually wrong. They want to censor speech because they are the bad science, and people are figuring it out. If they were right, then yes, "more speech" would be a great strategy.
It failed miserably. The incentives were favorable to spammers but not to actual professionals. No real expert would invest their time into the platform, putting their reputation on the line, for a few paltry ad dollars. But there were some people who mastered cranking out low quality content.
I'm not making any claims about the new effort, just pointing out that identifying a problem and putting resources behind a solution is often not enough.
Edit: a good example to me was the start of the birther movement, which is the quintessential modern conspiracy theory, and I remember it spreading through talk radio and Fox News.
The spread was by conspiracy-theorist types looking for self-confirmation, first among others of the same type. It isn't as if you could get them to support, say, sensible infrastructure upgrades, because it wouldn't flatter their egos.
YouTube Rewind got critically panned, and overt efforts by YouTube tend to be poorly received.
Fundamentally the ad companies (FGA) have a problem - their entire value proposition is that the little space at the side of the page and before the content is very valuable and has an influence on people's thoughts and decisions.
So if they then try to claim that the content itself, or the list of recommended "watch next" videos on Youtube, isn't influential, it fails the sniff test.
(Apple are mostly clear on this front. Netflix know the "watch next" is critical to their success, but they don't let people fill it with unvetted garbage.)
We created tech to usher in an era of new mediums. But we never stopped to ask ourselves what we should keep from the "old" systems. In any system that gets recreated/refactored, there is always a duplication of effort. And that's what we're seeing right now. We're slowly realizing that editorialization is desperately needed.
In all these social graphs that have been created, we assume all "nodes" are rational actors, or are likely to be rational actors, and that information, if just set free, would steadily sort itself out. But that's not what we're seeing. Old diseases are new again, and entire republics have fallen victim to misinformation.
It's well past time to address these issues. I thought 2008 was bad. I could never have imagined 2016. And now I'm severely worried about what nonsense we'll see in 2020.
Tech companies, and the employees in these companies, need to continually ask themselves what they're empowering.
Netflix's deal with Goop is not encouraging; Goop is just a "lifestyle brand" that is selling snake oil.
If you monetize on ads, you need damn expensive ads. If you want to have expensive ads, you gotta do something to actually make them work.
If sustainability rather than valuation were the goal, there might be more incentive to avoid these kinds of odd, consumer-hostile behaviors. Users may not be the product, but they're not the direct revenue source either. As long as ads can trick people into clicking, there is an incentive to charge for those ads.
If a person feels a consumer-facing company doesn't align with their values then they won't be a customer.
So should information be regulated if it can cause physical harm to oneself and others? This is not an easy question to answer.
Kind of like this comic: http://www.poorlydrawnlines.com/comic/knowledge/
We can handle quantity, now we need to handle error correction and build better access permissions too.
I can't sell vitamins that say "cures cancer" on the packaging, so why should a YouTuber be able to sell ads while saying the exact same thing?
This is because Wikipedia policy makes it clear that all information must be verifiable against reliable published sources. Multiple editors are aligned with that policy and work with others to enforce it.
The general populace has not been trained to do that, so false information proliferates. Maybe that reasoning skill is what we should focus on teaching.
Which is to say that it's a false aphorism.