Covid stuff and fake news aside, I've reported maybe 20 obvious scam videos, or posts with extreme hatred and threats of violence against a group of people (stuff where 99.99999% of us would agree it should be removed without question), and only once did they remove anything. All the other times I get a message that they reviewed my complaint, that it does not go against their standards/TOS/bla bla, and that I can block the video/post/channel for myself.
And taking into account how much random stuff they ban proactively, it just does not compute in the logic board in my brain.
One way to think about it is guilt vs shame. Guilt is where you feel bad because you've violated your own standards. Shame is where you feel bad because somebody of standing has called you out. Platforms generally feel shame but not guilt, so most of their actual improvements in anti-abuse and TOS enforcement come from PR messes and other things that trigger shame. But when the heat is off and it's just you reporting something, they're not going to be particularly bothered.
"Platforms" feel nothing.
But you are right in a sense: it is a large enough group that its momentum will crush any feelings of individuals on the team.
The only way to get a rational human back in the mix is through legislation or litigation. Given sufficient repetition, eventually those things are abstracted and encoded again, in a cycle seemingly designed to prefer nonhuman control systems.
The technical platform is still ultimately an extension of the organization, which is the entity that ultimately has human goals - mostly profit, and avoiding outcomes that threaten that profit.
What actually goes on is an incredibly complicated set of relationships between thousands of people. We don't have a useful specific vocabulary for that, so we have to resort to metaphor. But I believe the metaphor is a valid one, and it's grounded in the actual emotions of key people making these decisions.
Probably not worth debating this further, but I would contest that number.
I lean very much towards unrestricted flow of information (and I know, there are way more radical ones around here). Not totally unrestricted, with exemptions like CP, but my default is also to just not look at content I find disturbing. Which means I basically avoid most of those social platforms altogether, as I indeed find most of the BS posted there disturbing.
Back when I had a Facebook account, I just started reporting all ads as sexually explicit. They always came back with "no it isn't" but it took a while. And I stopped seeing ads.
If you work for these companies and don't quit for simple ethical and moral reasons, YOU are part of the problem and aiding and abetting evil!
How sure is he of that, and why? As a comparison, nuclear power has been of great utility for many countries, but I sure would not want to see it "democratized".
"None of the kits we sell contain anything dangerous, nor is the average person experimenting with biology inherently dangerous. If you are trying to engineer something hazardous — like say a bat virus — you might have a problem, but the genome search space is large enough that accidentally creating a harmful organism is astronomically improbable. Access to most dangerous materials are also heavily restricted..."
This sounds really sketchy to me. I can't tell if he's lying or if he really can't see any further than his own research program. Sometimes a field of research is just actually existentially dangerous for life on Earth; you can't handwave it away.
The possibility of biohacking as a hobby is thrilling. Trouble is, the dangers are so extreme they're difficult to even think about. Like, thought experiment: can you design a virus that kills all eukaryotic cells? Can you think of other possibilities equally terrifying? Of course stuff like that is far off, but neither you nor this guy know whether 'far off' means 10, 100, or 1000 years.
We already have examples of substances that are strictly harmful to almost all life. Dispersal of them (via the democratization of access to chemistry) is already placing stress on the biosphere, which again is such a large and horrible thing it's hard to imagine or reason about.
Another thing: it is still plausible that covid was a lab escape. I think the biohacker community should be incredibly humbled by that, and really think about how they'll be seen in decades to come. We've seen cycles of technologists becoming disgraced because of the impact of their products; those cycles seem poised to accelerate. Maybe this is a good time to get on the right side of history?
The problem with individual biohacking is that there's not even oversight by colleagues. Maybe I'm overly optimistic, but I'd expect someone working near/with a biochemist who was doing crazy stuff to speak out about it. There's no real equivalent for this if the work is going on in somebody's basement or workshop.
0% never happens. A 0.0…1% chance can happen on the next pull of the lever.
Never, ever, ever underestimate the danger of small odds and just one...more...pull.
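The intuition can be made concrete with a few lines of Python (my own illustration, not from the thread): even a tiny per-attempt probability compounds toward near-certainty over enough independent attempts.

```python
def p_at_least_one(p: float, n: int) -> float:
    """Probability of at least one occurrence in n independent trials,
    each with per-trial probability p."""
    return 1.0 - (1.0 - p) ** n

# A one-in-a-million chance per "pull of the lever":
p = 1e-6
for n in (1_000, 100_000, 1_000_000, 10_000_000):
    print(f"{n:>10,} pulls -> {p_at_least_one(p, n):.4f}")
```

With a million pulls the odds of at least one hit are already about 63%; with ten million, essentially certain.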
It's laughably/terrifyingly easy, the hardest part is building or buying the bombs.
But you don't see it done regularly at all.
Granted, a single nuclear or bio attack would affect way more people.
All the lead paint and tobacco company executives retired to their yachts, scot free. The fats hydrogenation people responsible for millions of heart failures are walking around loose to this day. Nobody currently responsible for slow, painful deaths via liver damage from sugar poisoning, in the hundreds of thousands each year, is looking over his shoulder.
But people still talk about the Zodiac Killer (who I have no factual reason to believe is Ted Cruz).
Nuclear power is already "democratized" by design. The basic framework of the non-proliferation treaty is that signatories get to use nuclear power (and receive international assistance developing nuclear power) in exchange for a commitment not to develop nuclear weapons.
My main point, though, is that nuclear energy is a fairly strange, highly-regulated-at-all-levels beast with its own long, complicated history, so it's not a particularly straightforward parallel to a hypothetical weapon-of-mass-destruction-enabling genetic engineering.
Just because people gatekeep knowledge doesn't automatically mean only good people will get access to it. It actually means that if bad people infiltrate the gatekeeping organization, they can prevent good people from participating.
The first one is okay, as long as we're not talking about experiments that may be dangerous to the public (a possibility he seems to categorically reject for some reason, even though home-made bioterrorism is a real threat). The second one just doesn't follow at all.
YouTube has no realistic way to tell whether someone producing home-made science on YouTube is a PhD ex-NASA biohacker who follows best practices or just a complete quack trying to sell dangerous fake remedies to vulnerable people. In practice the latter probably far outnumber the former, and the incentive to promote them is stronger, given the huge, generic audience on YouTube.
77% of the YouTube audience in the US is 17-25 years old; it's not some niche forum for engineers, and the notion that they can read scientific papers and weed out legitimate science created at home from misinformation is absurd. The correct platform for something like this is a separate forum or community where enthusiasts meet, with some barrier to entry, not the mass media.
I immediately question the motive of someone who promotes individual science or 'hacking' and seeks the largest mass media audience. I think the motive is much more straightforward. The author has a company that sells genetic engineering kits to people, and being banned from YouTube impacts him financially. I personally think biohacking and eccentric science is cool, but the appropriate audience for the kind of things he did, like treating himself with CRISPR or replacing his entire microbiome to treat IBS, is probably a community more self-selected than HN, not YouTube, which is mainstream television for young adults.
> 77% of the Youtube audience in the US are 17-25 years old, it's not some niche forum for engineers and the notion that they can read scientific papers and weed out legitimate science created at home from misinformation is absurd.
Only PhDs or engineers are allowed to be interested? You think only people with CS degrees should learn about programming/hacking? Only people with electric engineering degrees should learn about circuits?
Also 17-25 is mostly adults. Also, wouldn't the people who watch his videos be self-selecting? I'm guessing 99% of the demographic you mentioned wouldn't be searching for his video.
> The author has a company that sells genetic engineering kits to people and by banning him from Youtube that impacts him financially.
Probably. But so what?
> I personally think Biohacking and eccentric science is cool
But people in the 17-25 year old demographics are not allowed to find it cool and enjoy it?
Your argument is that it could be harmful and therefore everyone 17-25 should be barred from it. That's the argument I heard in the 90s to prevent young people from learning to code/hack/etc. I don't understand how the culture of censorship and patronizing paternalism snuck into the "hacking"/tech environment.
You're shocked by the attitude that hackers should stay off mainstream commercial platforms and avoid pandering to a generic audience? You do realize why we're having this conversation on HN and not on Facebook, right?
The 2% of young adults who are genuinely interested in science and tech and DIY maker culture will find him regardless of where he is. That's how it always works when people actually care; it's a good filter.
I'm not paternalistic, I'm opposed to grifters and attention-seeking. There has always been a barrier to entry in hacker culture, because getting past it signals that you have some commitment and degree of willingness to learn, which is particularly important when we're talking about gene-editing yourself.
I'm in shock because you believe 17-25 year olds should be barred from knowledge.
> You do realize why we're having this conversation on HN and not on Facebook right?
You seem to have a sense of superiority. Is HN better than Facebook? I'm not so sure. You aren't better than anyone because you post here. I'm certainly not better than anyone because I post here. I certainly wasn't given any test or asked for any credentials to post here.
> The 2% of the young adults who are genuinely interested in science and tech and DIY maker culture will find him regardless where he is.
This argument is even more shocking. What about the percentage of people who might have become interested if they were exposed to it? Do you know how many people developed interest in something because they were introduced to it? There are millions of minorities/women/men who didn't go into computer science because they simply weren't introduced to it.
> That's how it always works when people actually care, it's a good filter.
Because the hacker ethos is to make knowledge as difficult as possible to reach?
> I'm not paternalistic, I'm opposed to grifters and attention-seeking.
Why didn't you say so? Why did you write so much about "17-25 year olds", PhDs/engineers, niche forums, etc.? Grifters and attention-seeking aren't my cup of tea either, but they don't deserve to be banned. Also, is grifting and attention-seeking aimed at under-17s and over-25s okay?
Your logic and thinking are what paternalists in the past applied to women and minorities. It's why people prevented women and minorities from reading, writing, and gaining knowledge: it would supposedly be too harmful to them. Nothing is more anti-hacker than that.
Oh, maybe you're right, if you mean 2% of 2% for those who will manage to find his videos elsewhere.
This isn't true. Finding these kinds of things is nontrivial.
For instance, I hadn't heard of him, and neither had my friend with a PhD in molecular biology, who found this pretty interesting.
There is no such thing as the spirit of the internet, or of hacking in general.
And there never was. It was always a range of attitudes and approaches, from the get-go: massive assholes thrown into the mix seeking to do damage for the lulz, elitists, gatekeepers, and what not.
He even mentioned understanding why YouTube would be wary of his content, and mostly was unhappy that YouTube did not put some of their profits into being able to discriminate between home-made science and the quacks exploiting the vulnerable.
But none of this article made me even think the author believed YouTube had some sort of obligation to host content they did not wish to. He just wasn't happy with the decision they made.
Also funny to question whether the guy is acting on financial incentives to promote his ideas. Of course he does.
I think it's a valid one. Humans are really bad at fully separating and understanding our influences, never mind selectively ignoring some of them.
Given the perception of many YouTubers as being more 'pure' ideological advocates, I think it's reasonable to bring up. Now to be fair, most any YouTuber that's likely to be a topic of discussion has almost certainly moved far past the point of actually being that 'pure' ideological advocate.
Not everybody is going to read the whole article. Not everybody is going to put 2 and 2 together while they're reading and reevaluate his previous statements with that new context.
I think it's pertinent information when evaluating his statements and coming to a decision about how you feel these things should be handled. I'd say mentioning his financial interest is more akin to a pointed summary.
That said I don't think this was an attempt a deception, nor particularly deceptive given all the context. But still, it's relevant. Some people would give an explicit disclaimer, others not. I don't think this is clear cut enough to say one way is absolutely correct.
What do you mean by this? Perhaps I haven't searched deep enough, but to me, YouTube these days consists primarily of:
- Content creators, who are in it absolutely for the money, and whatever channel they run is just an excuse to get people to view ads (including product placement, and ads for creator's Patreon);
- Conspiracy nuts, who may or may not also be in it for the ad money;
- People reposting copyrighted content without having the copyright (i.e. the category that's responsible for YouTube's success in the first place);
- Media companies posting copyrighted content legally to take over the ad revenue stream.
There's some sprinkling of people who genuinely want to talk about their hobbies or ideas, without optimizing it for monetization. And here and there someone uploads some random video to share with friends. But the way I experience YouTube, almost all content creators are either marketers or wannabe marketers.
But that's where people end up, not where they begin. Usually. At least that's the perception, for the kinds of channels you probably find most of the HN folks watching, including the channel run by the man in question.
Somebody starts a channel about something they're passionate about, and are able to bring something to the table in relation to it, which lets them make some really cool videos that most people couldn't do.
Over time, things start moving in the direction you mentioned as it turns from a passion project they expect to go nowhere into a living. However I don't think most viewers exactly see it this way. I think there are a variety of reasons for this, including in particular just how far the medium pushes parasocial relationships. The slow change is also a factor for both the audience and the YouTuber themselves, I think.
I think that the author would agree with you here, but it's still worth calling out as something that changes if we move from a centralized content production model like PBS to a distributed-with-moderation model like YouTube.
I think that they get the “cause” wrong, attributing too much to the “cult” of mainstream science, when it seems like a logistics problem, but the effect is still that no one is able to do a deep dive on validating content. The best they can do is diagnose how far content seems to be from the mean.
We shouldn't pretend someone with a significant following is banned or demonetized because The Algorithm Made Me Do It™.
YouTube has the resources to verify whether a popular streamer is a genuine PhD who formerly worked at NASA. Far smaller companies do these kinds of verifications all the time. It just chooses not to, even in the process of banning an account.
Note I'm not arguing those credentials alone should mean very much. I just question, in general, the argument that companies can't "realistically" take certain measures because they wouldn't scale. Most of the time, they don't take them because they're not required to and it wouldn't improve their bottom line.
That said, I think only a very small proportion of videos would take that amount of verification, most could be dismissed far more easily.
Are you sure? It sounds nice, but I think in practice it would be overrun with quacks selling herbal remedies that claim to make your boobs bigger, just like email. So instead of censorship being built-in, you get censorship tacked on after-the-fact. In the world of search engines, bad content drives out good.
Alternatively, you have a dumb pipe that takes care of hosting video without trying to host things like comments or recommendations, so even if you upload spam nobody actually sees it unless they find it elsewhere. But that’s not a solution to spam. It’s just making it somebody else’s problem.
As for censorship, I'm sorry but I don't consider what even a corporation whose operation is on the scale of YT or FB does to be censorship.
User A posts content. User B subscribes. Entity C decides that B is not allowed to read what A has to say.
C is a censor.
The type, scope, and implementation of the censorship matters. Hacker News removing spam comments is different from YouTube removing this guy's account, which is different from YouTube removing any video that is critical of Google. And all of those are completely different from Thailand arresting anyone who criticizes the king.
If you insist on saying all of these things are the same and if you are against one of them then you have to be against all of them, then you aren't going to get much support for fighting censorship.
Censorship is natural and expected in many, many contexts. Any time a parent punishes their child for saying something they feel is unacceptable, that's censorship. It's just censorship in the hope of teaching their children how to be responsible members of society.
The difference between censorship at the individual or group level and at the government level is that with individuals or groups it's possible to find other people or groups (or create your own) where that speech is not censored. At the government level, where they can control all aspects of expression, that might not be possible, which is why it's more of a problem and why it's set as a fundamental right in some countries.
YouTube does not control all expression through video. There are other video streaming platforms, peer delivery networks that can have platform front ends, and social media networks (which YouTube might to some degree itself be classified as) that allow video dissemination but are operated differently and have different restrictions.
YouTube is censoring people, just like it has always done since day one, and in new and evolving ways as their policies change. That's expected, and legal, and the same thing every other commercial platform does, and if people have a problem with a specific type of censorship YouTube performs, they should give their attention to a platform that doesn't, but not just because there's censorship at all, because of course there is.
For almost all intents and purposes, the audience is on YouTube only. Use anything else, and you are basically guaranteed to divide your audience by two orders of magnitude. Such is their monopoly on videos that are over 1 minute long.
This makes YouTube deplatforming very close to actual government censorship. The only meaningful difference is the lack of due process.
The audience is wherever there's a link to click on. Anyone who can watch YT can watch vimeo or peertube or even a self-hosted video. There's no problem with that click - the problem is getting people to click.
You want YT to do more than host videos, you want them to market videos to their users. If YT was a completely passive video host (i.e. when you watched a video, there were no links to other videos at all), saying "the audience is on YT" would be meaningless - people could watch YT videos all day and would never ever see a link to any video that they didn't learn about via some other mechanism. What you seem to object to is YT removing material from participating in "the algorithm", which is essentially a marketing process.
This is where the audience could be.
I don't know the solution to be honest. But the reality is, if a video is not on YouTube, it will not reach a wide audience. If a popular channel gets removed from YouTube, few people will ever watch it again. In most cases this means short term bankruptcy.
People could click. People could follow. But they don't.
But that's not because of any technological issue with watching videos hosted anywhere else. There are absolutely zero technological obstacles to people watching videos elsewhere.
The reason it doesn't reach a wide audience is because of "the algorithm" (or rather, two algorithms):
1. the one that YT uses to put possible videos for you to watch in front of you while you watching something else
2. the fact that people tend to search on YT and tend to share YT links rather than links to other video locations (a "human procedural algorithm", if you like)
I don't see how you can equate the power that this "gives" YT with governmental power. There is nothing stopping anyone from doing things outside of YT other than their (generally incorrect) belief that being unable to leverage "the algorithm" is death.
> If a popular channel gets removed from YouTube, few people will ever watch it again.
There's no right to use YT's algorithm or network effects for your own benefit. Does that give them great power? It does, yes. Is that like a government? I don't think that it is.
On the other hand, some important topics are aimed at a general audience: news, politics, scientific popularization, infotainment, entertainment… People don’t actively seek out those things, they stumble upon them and select what they like… with the help of the "algorithm".
Whether that’s a good thing is another matter. The way YouTube works is eerily close to doom-scrolling, and I’ve lost a lot of time there. My point is, to even have a chance of reaching a general audience, right now the algorithm and people’s behaviour is such that the only place is YouTube.
Now I’m not saying that everyone deserves to reach a wide audience. For one that’s flat out impossible (10 minutes watched by 100K people is a million minutes, nearly two years of total watching time), and most content is either niche or crap anyway. High quality content however, that makes a positive impact on the world (for instance by helping, informing, or entertaining people), does deserve a chance.
Does it deserve any particular way to the top? No, of course not. But I do submit they deserve at least a fair chance of being widely watched. And right now, again, that only chance comes from YouTube.
As for what we should do about it, I see two routes. One is regulation. We could officially recognise that YouTube basically holds the only meaningful key to an important kind of public discourse, such that any video they refuse to show is effectively censored. This makes them a public utility, and should be treated as such. I’m not sure what that should entail for the search & suggestion algorithms, but it sure means that taking down a video is an infringement on Free Speech, and so should be approved by a judge.
Yeah, that will never work out. Too many videos to process, not enough judges. So I think we’d much better take another route, if at all possible: find a way to severely reduce YouTube’s market share, and have actual competition between many platforms. That way if someone is kicked out of one platform, they can still use another. Or we could expand self hosting, or multiply the PeerTube instances…
What I absolutely do not want (though unfortunately it looks like we’re headed there), is the kind of regulation where YouTube is mandated to filter videos, in such a way that the only solution is an unsustainable level of automation with lots of false positives, no due process, and no way to appeal (like right now in fact, only it’s official).
A better regulatory route would be giving platforms a clear choice. Either behave as a utility, which means utter neutrality: no integrated search or suggestions unless they’re demonstrably neutral, no filtering, and no arbitrary takedowns; on the upside, if they happen to host illegal content, they’re not responsible until a judge tells them to shut it down.
Or, retain the biased searches and suggestions (they do have value), all the filtering they want, arbitrary takedowns and bans… but then they are treated as editors, and become responsible for everything that happens on their turf. If someone manages to publish some illegal content, they are criminally liable and may in extreme cases go to prison.
Either you’re a carrier, or you’re an editor. Under that rule, YouTube would either become a mere host, at which point we don’t even care about their market dominance (though I suspect their market share would drop as they’re deprived of most of their network effects); or they would become an editor, and that’s so unscalable they’d need to shrink like hell to be sustainable. And again, they’d lose market share and we’d get the diversity that is needed to make sure that being banned from one platform is in practice very different from actual censorship.
2) I don't agree that not-on-YT means no wide audience. If I have 2M twitter followers, and I post a link to a video on (say) Vimeo with a sufficiently click-baity description, that video is going to get "a wide audience" (or at least, a large one). There are ways of alerting people to a video's existence beyond YT's own algorithm. The original (hah!) meaning of "went viral" didn't mean "YT recommended it to lots of people and they all clicked". It meant "link got shared by lots of people in an ever-expanding tree of contacts". That can still happen.
3) What's hard are the videos that fall in between the cracks. Not "my kid's 3rd birthday party singalong from last week", and not "the latest video from whomever the current k-pop phenom is". Videos that get, say, 200,000 - 2M views.
4) It's unclear how much view counts on YT are impacted by channel subscriptions vs the algorithm right now. Supposing for a moment that they are heavily correlated with subscriptions, YT could drop the algorithm and not see gigantic shifts in view counts. But I have no idea whether that's true, and I can think of lots of reasons why it may not be. A service that only shows/points you at videos from channels you've subscribed to is radically different from the YT of today.
They’d have to remove shadow banning at the very least. I’m also virtually certain that their algorithms are machine-learning based, and as such very difficult to inspect and debug; ML-based algorithms have also proven to be biased in other contexts (like facial recognition).
> I don't agree that not-on-YT means no wide audience.
I… stand corrected, I guess. Also, I’ve just discussed it with my partner, and she pointed out a marginal exodus away from YouTube, which may amplify and chip away at their dominance.
> A service that only shows/points you at videos from channels you've subscribed too is radically different from the YT of today.
It would be indeed. YouTube is significant in the way it merges 3 tools together: hosting, search, and recommendations. They could be separate. (By the way, Google itself tends to merge search and recommendations, the famous "filter bubble".)
You can't come into my print shop and print Nazi literature. Could you print it yourself? Sure. So I'm not preventing you from printing it - I'm preventing you from printing it on my equipment.
I.E - No, that's not censorship.
When a government censors, it is (generally) saying "You may not say or print this in our society".
When a parent censors a child, they are generally saying "You may not say this within our family".
When a particular social group censors a member, they are saying "You cannot say this within our social group".
When a social media platform censors someone, they are saying "You may not say this (or anything else) on our platform".
One of these things is not like the others.
(And when the social media platform (google) owns not just the video site but also the search engine and the browser - it starts to move closer to the government/parent example.)
We traditionally grant children fewer rights (at least as far as self-determination) than adults, so this aspect is really in keeping with broader cultural norms.
And sure, the google.com/YT/chrome empire does move closer to the government example, but how much closer is a matter of some considerable debate. I'd argue not by much, but I know others would disagree.
Regarding YouTube, I think the author would agree with you:
> The problem is big tech companies making billions of dollars aren’t capable of doing basic analysis of scientific work, or hiring a team that can, which is why the best they’re capable of on the pandemic front, for example, is attaching a link to the CDC website on every post that mentions “Covid” or “vaccine.”
I don't think we need to assign blame to YouTube here, but it's still the reality, and we should consider what it means for access to science.
Maybe we (as a society) should in fact keep experimental science off YouTube. (I don't agree, but I can see an argument.) Even in that case, the decisions made here are disproportionate. The author starts the article by describing how he's banned from even logging in, not just uploading; he ends by saying how he has to worry about being locked out of his email. I agree with the author that we're in dangerous territory by giving companies unilateral control over this process, even if we do it for good reasons.
I mean, I see your point, but this kind of enforcement seems random or worse (takedowns are maybe affected by complaints from lobbyists, and chemistry maybe has fewer of those).
No disrespect to NileRed/NileBlue, that's a great channel.
This, 1000 times. I keep having to tell people this, and it's frustrating because people who parrot "just trust the science" are acting more like religious zealots. "Trust the science" is just another way of saying "just have faith".
https://reason.com/2021/06/16/why-did-youtube-remove-this-re... ("Why Did YouTube Remove This Reason Video?")
edit: Here's a complete transcript of the censored material, posted by Eugene Volokh
"The Crime of Curiosity" is a great way to put it, because we're already banned from questioning a lot of areas of science on most major tech platforms. This system seems to be helpful in some areas until it makes some mistakes, in which case the effects are catastrophic.
Remember that within the first year of covid, "masks work" was considered misinformation along with "a vaccine is likely to happen within a year or two", along with "this may be related to a lab leak", along with... Reality is always changing and uncertain and our policies should reflect that we do not have it all figured out, nor will we (collectively) ever. (Edit: as one commenter expressed skepticism of the mask claim, read over a link like https://old.reddit.com/r/AmItheAsshole/comments/fe2oqg/aita_... about how normal people felt about masks in the first few months of covid. It's pretty shocking and I feel like I'm living in an alternate reality just re-reading it and the top responses).
Now that our infrastructure is being expanded and built out with censoring of 'incorrect' information as a top priority, I fear for how bad the mistakes we make in the future may be.
I fall pretty strongly on the side of combatting misinformation, but I disagree with outright removal of content. I think it should be flagged as misinformation, but left available. Put another way, I'm a fan of labelling, not censoring.
I want transparency. "Our misinformation bot rated this as a 90% chance of being misinformation because XYZ." I bet the only reason we can't have that is because the ML bots suck so much the tech industry is scared to implement any system that might be open to scrutiny or analysis. It's a bit ironic.
Very much like spam: filters are good but not 100% good, so there must be a way to look at what the filter has rejected, and allow the reader to judge.
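A transparent verdict along those lines need not be complicated. Here is a minimal sketch, purely illustrative (the token weights, bias, and wording are all hypothetical, not any real platform's model), of a linear classifier that reports its score together with the tokens that drove it:

```python
import math

# Hypothetical per-token weights such a moderation model might have learned.
# Positive weight pushes toward "misinformation"; negative pushes away.
WEIGHTS = {"miracle": 2.1, "cure": 1.8, "suppressed": 1.5, "study": -0.9, "data": -0.7}
BIAS = -1.0

def explain(text):
    """Score a post and report which tokens contributed to the decision.

    Returns (probability, list of (token, weight) sorted by influence).
    Repeated tokens are counted once, keeping the sketch simple.
    """
    tokens = set(text.lower().split())
    contributions = {t: WEIGHTS[t] for t in tokens if t in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    score = 1 / (1 + math.exp(-logit))  # sigmoid -> probability
    reasons = sorted(contributions.items(), key=lambda kv: -kv[1])
    return score, reasons

score, reasons = explain("miracle cure suppressed by big pharma")
print(f"{score:.0%} chance of misinformation because of {reasons}")
```

The point is only that "because XYZ" is cheap to surface for simple models; the harder question is whether platforms' actual classifiers are interpretable enough to survive that scrutiny.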
I've had to deal with Microsoft outright dropping everything from a domain (not on this server), so I do know it happens, I'm just not convinced it's necessary.
> I'm just not convinced it's necessary.
Well, yes, in some special cases dropping mail is not necessary. My mistake to assume your case is one of the common ones.
Google has some very advanced natural language processing technology. Try this Google search: "What year did Neil Armstrong land on Mars?"
https://news.ycombinator.com/item?id=28003635 ("Ask HN: Googler/YTer Able to Help with CIDRAP's Dr Osterholm's YT Strike?")
"Portfolio company" has nothing to do with how we moderate HN, except that we moderate less in such cases. Explained many times over the years: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....
 In this case, both community and moderators.
Imagine the US five years from now, with Team Trump in charge of the censoring.
The clock is ticking on open source, decentralized solutions. Nothing else is relevant.
I also remember experts expecting effective vaccines by the end of 2020 already in January or February 2020. The key scientific challenge in vaccines was never about developing them but about collecting sufficient evidence that the vaccines are effective enough and safe enough in diverse populations. And beyond that, getting the vaccines approved and distributed to billions of people before the pandemic is over were even bigger challenges.
One thing contrarians often miss is that science is fundamentally about building and changing the consensus. It's not about convincing yourself that you have discovered something new, but about trying to convince yourself that you are wrong. And failing to do that, trying to convince the relevant audience that you have in fact discovered something new. Believing in something is a matter of faith. Convincing a skeptical audience that is nonetheless open to new ideas is science.
Or, in other words, those platforms have banned science. It doesn't matter what exactly their dogma is; science cannot happen there.
This may not be a problem; we don't need every single platform to support science. But it's not aligned with those platforms' PR.
Here's a version with a lot fewer deleted comments. If anyone wants to see what 1984's memory hole looks like, this is it. You know what you remember, but you're told you're insane for remembering.
Thank god for the Internet Archive and other archival sites; the retroactive modification of articles in the 'papers of record' has been particularly egregious in the last 6 years.
No, because that never happened. Maybe don’t spread misinformation yourself if you want to make a point.
> "The typical mask you buy in the drug store is not really effective in keeping out virus, which is small enough to pass through material. It might, however, provide some slight benefit in keep out gross droplets if someone coughs or sneezes on you."
> He added: "I do not recommend that you wear a mask, particularly since you are going to a very low risk location."
> Fauci has previously been criticized for changing his position on masks. Early on in the pandemic, he advised against wearing face coverings, but that advice evolved over time.
To the second point, if I am recalling correctly, this was in part because we were undergoing a massive supply shortage. I distinctly remember my roommates (healthcare workers) being told to keep their N95s in a brown paper bag and reuse them.
To the third point, everyone is a critic and everyone is bound to make mistakes in a rapidly evolving situation. Given the supply issues, front-line folks had to re-use masks (read: already contaminated and unsafe for use), so I'm not so sure he was wrong in advising the general public to hold off.
I think the keyword that, in my mind, justifies the censorship here is "on most major tech platforms". Nobody is banning the discussion of these ideas in academic journals or HN or other places where curious people can go to discuss things - it's just making sure unverified, potentially dangerous theories aren't spreading like wildfire amongst the general population who _aren't_ curious and will assume whatever they're reading is absolute truth.
The general population is not equipped with enough information or enough time to meaningfully distinguish between bullshit and non-bullshit. Just yesterday, there was a post about a new long article/paper/publication by Stephen Wolfram, and the comments there made it clear that even among the academic community relevant to what he writes about, there's widespread disagreement about whether he's an egotistical empty vessel or the second coming of Einstein and Newton's lovechild (with a possible bias towards the former).
I'm all for elevating the agency and capabilities of "the general population", but pretending that a random discoverer of a video on self-use of CRISPR is going to be able to sensibly identify what may be of value and what may be dangerous is just incredibly naive.
And note: the problems do not arise with the general member of the public who decides to do their own deep dive after coming across an idea. If that was the universal response to encountering unfamiliar or controversial ideas, we might be in good shape. The problem is that most people have neither the time nor the inclination to do this, and so stuff just ends up floating around in their impressions of the world, unresolved, but perhaps casting doubt on stuff that is almost certainly true (or false).
How certain are you that you're not part of that particular subset of the 'general population'?
[EDIT: because of the work I've done, there are a handful of areas where I think my BS detection abilities would exceed those of an average person. But that's likely true for the average person too. Overall though, we're all a lot less able to detect BS than we think. ]
Do any of these facts support your point?
Mixing news into social media, sharing stories to Facebook, turned it from a coffee shop into a town square with a million soap boxes.
Bad actors then seized that format and used it to spread propaganda and disinformation.
That's not to say that the basic forms of social media were perfect, far from it. They're just as susceptible to manipulation. There's just something about Facebook's link to (mostly) real identities that elevates it to a hugely effective propaganda platform.
That only happened after Facebook decided to editorialize everybody's main view. People were not even used to sharing news at that point, but it was the only thing that Facebook allowed to spread.
Anyway, I'm not sure about causality. But claiming that fake news spreading through a heavily editorialized medium is proof that people cannot handle self-selecting their information is a complete non-sequitur.
That's a great point, I hadn't considered that. Thank you!
I'm saying that bombarding the general population with low-quality, dangerous information from sources that appear authoritative (what the tech giants are suppressing) is a bad idea. People take the information seriously and use it to cause harm to themselves and others.
There are a certain group of people that like to fuck around with conceptual arguments for the fun of it, but the public square is not the place for that kind of play.
>But theories aren't dangerous. Actions are dangerous.
Ideologies absolutely are dangerous. They've started wars, genocides, and cults. When does a theory stop being a theory and become an ideology?
I was banned on HN after questioning QM and Big Bang theories. 240 downvotes in one day, then a ban.
The irony is that many of the people who claim to be free-speech advocates see nothing wrong with this platform blacklisting people who don't share in the groupthink.
However, good discussion ultimately requires respect for one another, and it seems maybe that thinking is not bilateral in your case. I’d encourage you to introspect on why you may be finding resistance wherever you look.
What was read: "I've seen mods censoring stuff because of good discussion".
Sometimes you just want to give up.
Never heard it put this way before, but I think I agree with it.
We are, sadly, long since past the time that cloud providers, free content hosting, YouTube, etc, have to be considered hostile to anything outside the tech industry's consensus of what's allowed (as interpreted by their algorithms, which they like to pretend are fancy, but seem only barely smarter than keyword matching, except when they're rather dumber). Of course, due to Scale(TM), you can't actually have any humans in the loop. Unless, of course, you're well enough connected to get a bit of a rise on a tech news site, at which point a human will (usually) step in, mutter something about a mistake, fix the problem (the actual problem being the bad PR created), and go on their way.
If you're posting funny reaction videos to nonsense content, sure, use YouTube. For anything serious, this is no longer a good idea (well, if you're outside the tech industry consensus for whatever that is today).
But if you are even the slightest bit outside the mainstream, you probably shouldn't be using YouTube, or even the various cloud-based hosting services. Your own server, in a local datacenter, perhaps fronted by CloudFront, is closer to the right answer these days.
In the rush to free services, we've handed far, far too much power to a very small set of companies, who are now happy to use that power to turn the internet into only what they want to hear.
I'd say posting something at the intersection of fringe and dangerous can get your content canned, and unfortunately for the article author, "How to cook a COVID-19 vaccine in your kitchen" is in that category.
If something has become crystal clear with COVID, it is that there are secondary effects and unintended consequences of biological experimentation that can affect our lives greatly.
In the same way as the Wuhan lab's relationship with bats and COVID, there is a relationship between a laboratory that was experimenting with chimpanzees and viruses 30 km from the origin of AIDS.
Playing at home could pose an existential risk to society at large. You will have the best of intentions, like the Wuhan people probably had, but the road to hell is paved with good intentions.
They were not playing at home. They were not amateurs. But, as you can see at 1:04 (see the soldier in the background), first responders just cracked open bio-laboratories and stole equipment after the blast, because they were "checking the building for fire" (a floor-by-floor fire inspection).
I would love to see almost a Wikipedia version of YouTube, where videos could be submitted by anyone and facts/references could be linked at any point of the video.
Thanks for the connection, that book looks really interesting.
I'm also baffled why anyone would fail to understand that the practice of medicine is protected. And that vaccines require mass testing before certification and public release. Because every genome is different and it's not just about the fact that you rolled the dice on a new treatment and personally got lucky.
And that people who don't have PhDs - and many who do - can easily fuck up something as powerful as gene editing in countless ways, with fatal or life changing consequences for themselves and others.
Given the very poor reliability of software hacking, "biohacking" is insane, by definition.
When you have a population taking horse dewormer to treat Covid, handing out biohacking tools is like taping razor blades to the limbs of people who haven't learned how to walk without falling over just because some of them want to try something a bit different.
This isn't about "censorship" or victimisation, it's about common sense and the precautionary principle.
Most libertarians believe that we should let people do what they like provided they don't harm others, which includes experimenting on their bodies to change their gut flora or genes.
I say let people have their freedom, even if they use it in risky ways you find stupid, whether that's skydiving or anal sex or changing their genes.
It's not just that. The real message is that individuals are not able to make decisions or do things for themselves. This is why any medical treatment that doesn't involve surgery or prescriptions is thrown in with homeopathy and general quackery. It's not just companies either, individuals will look at you funny if you do anything not sanctioned by some perceived source of truth.
In a way, this dynamic is an effective, natural check on moderation. If moderation primarily removes stuff that most people think is junk, then alternatives competing on reduced moderation will be junk. If moderation removes valuable stuff that lots of people want, then other platforms that host it instead will receive more traffic and legitimacy.
It's kind of like the Reddit vs. Voat thing. Reddit isn't great, but Voat ended up a cesspool, because Reddit's biggest exoduses were driven by hate groups.
If you're wondering if something unusual is online, you would always look on YouTube first. And then stay there.
This causes me to never be able to tell if I'm reading genuine frustration with the process of science or indeed just general quackery.
Maybe it would be good to list specifically what frustrations you have with what medical treatments and why. At this point I can only be sure it's not homeopathy.
But, you know, of course it's not homeopathy...
...energy stones though, that shit is legit.
No, you're not qualified to make that choice.
That's the line that makes me interested enough to read more and think about what the author is saying. Misinformation doesn't typically come with a disclaimer saying it might be wrong, so that tells me the author isn't being intentionally deceptive or spreading information where they have no clue what's happening.
Reading more, it actually piques my curiosity a bit. It reminds me of board-level electronics repair before I saw Louis Rossmann's YouTube channel. I always thought it was impossible, but then you watch one video of him doing it and realize all the rhetoric about it being difficult, impossible, and dangerous is just that: rhetoric.
I don't know what the answer is in terms of dealing with misinformation, but one thing I believe fairly strongly is that letting large, private institutions decide what's fact and what's fiction is problematic. The "facts" will always align with their business interests.
I'd even say it's risky to let big companies like manufacturers get away with their lies. I KNOW electronic and appliance manufacturers are lying about the complexity of their products because I can watch a YouTube video and fix a lot of what they claim is unfixable or too dangerous to fix.
Considering that, WTF am I supposed to believe when I read an article like this and someone is saying that biotech isn't as complicated as the profit driven institutions claim? There's got to be some truth to it if they behave the same as the electronics industry, right? Or am I sliding into anti-vaxxer territory?
No matter what, I think it would be a great step forward if we stopped accepting deceit and disinformation as being a normal thing for large corporations and institutions to engage in. We have so many provable examples in the electronic repair industry alone that I think it's endemic. Maybe if the big companies didn't lie so much, there would be less distrust and less of an opportunity for people to spread misinformation and propaganda.
Note: I don't think this guy should have been removed from YouTube, and I have no reason to doubt the veracity of his story; it seems plausible and interesting.
Is self-hosting the only option?
like it's great you want to make things accessible and help people take charge of more things in life, but I don't think most people want that - and even of those who do say they want that, few will still take the time necessary to actually understand stuff correctly
which leads to the related issue that caused all the regulations and regulators we have today: people falling prey to snake oil salesmen and downright harmful products in the marketplace. just look at the state of "nutritional supplements" today
to me, I think people should just be more aware of the limitations of what they can really learn given the constraints of day to day life and accept that specialization is necessary to making modern society function - thus putting their trust in the appropriate authorities when necessary
but even that is an idealistic goal, so idk
People need to be given the information they need to make informed decisions on topics that impact them. I don't have the time or inclination to figure out if the levels of heavy metal exposure for welders in the third world are dangerous or not because it doesn't impact me. I have plenty of time to figure out if any chronic disease I get has promising treatments that haven't been approved yet. For the doctor it's a matter of a 9 to 5 job, for me it's a lot more urgent.
I just look at sci-hub and the citation web of any papers I find there.
You're saying that the solution to having an ignorant population is to make them even more ignorant for their own good. That is the type of thinking that got us the dark ages.
Just have institutions enforce labelling rather than banning.
If COVID vids had big banners put underneath them saying "disputed"/"unproven"/"speculative"/"known false"/"likely false" I'd be happy (as long as it includes a link to what standards they are using). Similarly, the FDA should be able to enforce accurate labelling stating what level of verification a treatment has had, not be able to ban it.
If people still want to buy the snake oil (maybe they like how it feels on their skin?) let them!
I couldn't care less for this shit. "Yes I understand the necessity of censorship. But not me. I've worked at NASA. I've published 'a number of papers.'" Lol.
> The problem isn’t my thoroughly detailed research, which I would love to have critiqued in good faith. The problem is big tech companies making billions of dollars aren’t capable of doing basic analysis of scientific work, or hiring a team that can, which is why the best they’re capable of on the pandemic front, for example, is attaching a link to the CDC website on every post that mentions “Covid” or “vaccine.”
Indeed. They are not capable of doing basic analysis of every video, or hiring a team that can. Billions of dollars aren't enough to hire a team to analyze every video on youtube.
There are situations where someone who truly has deep merit meets someone who believes they do, and the latter tries to relate to the former but is so far disconnected that their categorically different experience immediately disqualifies them. The general public would be unable to recognize this, yet it would take significant effort for the former to explain the situation to an extent the public could understand, and at risk to the emotions and ego of the latter. Surely you can relate to this situation to some extent?
As for your research: are you saying there are no forums where your work can be "critiqued in good faith"? Or just that YouTube isn't that forum? Because you're right -- it isn't.
This is probably the most unscientific timeline we are living in now where everything is politicized and questioning the "experts" is considered sacrilege.
Even Nicki Minaj is being harassed for doubting the party line https://www.foxbusiness.com/technology/nicki-minaj-says-shes...
> Using a broad set of issues from the American National Election Studies, we identify rapid growth in the correlations between political attitudes from 2004 to 2016. This emergence of issue alignment is most pronounced within the economic and civil rights domains, challenging the notion that current “culture wars” are grounded in moral issues.
If you kept it to a small group of customers, it would maintain its value and potentially be worth $$$$ to them.
There is no better vaccine than the one for the current dominant variants. You can't really target future mutations, because you don't know what they will be. You could imagine restricting the best, most current vaccine to the rich to limit escape mutations, but besides the obvious ethical concerns, the virus will mutate anyway. Better to vaccinate everyone with the best, because the fewer people infected, the fewer chances you give the virus to mutate.
Mass vaccination might have made it better for costs on tptb in the past. But, if that doesn't work because a sufficient amount of the population refuses to cooperate...
> I’m sorry, it’s complete bullshit. Science is not something to be believed or trusted. Trust is antithetical to science. Show me the data, and let me decide for myself.
Cool, agreed, but, these arguments sound identical to those made by anti-vaxxers who believe vaccines cause autism.
Promoting experimental (biohacked) treatments and guiding people through administering them is not compatible with the currently accepted ethical framework for protection of human subjects, _especially_ if it's done with intent of generating scientific knowledge.
That, I think, is a large factor in why government and corporate entities do not wish do be seen as supporting, even tacitly, biohacking on humans. If you do biohacking on yourself quietly, no one will really care, but publishing what you learned makes it scientific research, and if it was done to a human it's human subjects research. So yes, the root of the conflict probably is a lack of acceptance by the "official science cult", but this conflict does not seem arbitrary or frivolous. Extension of current approaches to human subjects research to accommodate a public DIY biology / biohacking community may be possible but seems tricky to do without accidentally deregulating the whole field.
> Extension of current approaches to human subjects research to accommodate a public DIY biology / biohacking community may be possible but seems tricky to do without accidentally deregulating the whole field.
This is incorrect. The ethical framework and laws surrounding human subjects research pertain to institutions, not to individuals. In the US, there is no human subjects research board for unaffiliated scientists/hobbyists that could approve or deny their research based on ethical considerations, as there is for institutions. The Belmont Report that established IRBs was specifically motivated by institutional abuse of the kind you mention, not general quackery. Marketing of quackery falls under the purview of the US FDA. There is no regulator for independent science. If you are independent and including others in your research, you are only bound by general medical ethics.
The National Research Act and the Belmont Report were created in opposition to this attitude. Henry Beecher led the fight (in the 1960s, preceding the NRA and Belmont Report) to make including others in your research subject to research ethics, not just medical ethics, and to enforce this with regulation. The regulation was focused on institutions because at the time research was done by institutions. Now, post-internet and post-genetics revolution, we're retreading the same ground but with different details. A different approach is therefore warranted, but a free-for-all is just as unacceptable as it was in the early 20th century. I understand that this is unwelcome and is certainly inconvenient, but institutional science succeeded in regulating itself, and biohacking can do the same.
What is the different approach?
1. Autonomy (informed consent), 2. beneficence, 3. non-maleficence ("do no harm"), 4. justice (distribution of treatment where it is limited) are the principles that bind doctors who are operating on patients. What principles would you expect an independent researcher to uphold beyond what a doctor currently upholds? All other factors considered, as long as these principles are upheld, a doctor is free to try a new surgery for the purpose of exploring its effectiveness, i.e. engage in research. This is not new; private doctors have been testing procedures of varying risk without an IRB since the time of these commissions. Any researcher who violates these principles--where there is a harm to a research subject--is almost certainly liable under criminal fraud and torts laws. It is not a "free-for-all", as you characterize it.
The ethical framework does not apply only to institutions. Ethics are for everyone.
Edit: If the prevailing attitude is that doing human subjects research as an individual ("biohacker") means research ethics can be ignored, because current regulations cannot punish you, it makes total sense that Paypal etc. would simply run away from it. Keep in mind that the article frames the biohacking effort as an equal alternative to institutional science. If it is to be seen as a legitimate and equal scientific enterprise, it needs to develop equivalent ethical governance.
Whatever you may think of that idea, surely the idea that YouTube, which reduces down to a bunch of people sitting behind desks, should get to decide what others watch is questionable. That is, they think a small group of people are wiser and more intelligent than everyone and should get to curate and censor content for the majority at will.
I'm not sure I'd agree. I think that's putting far too much faith in them.
YouTube's goal, roughly stated, is "More Hours Watched." This, in whatever form is appropriate, is pretty much the end goal of any social media sharing handwave etc platform. More eyeball-hours to sell ads to.
For a while, YouTube's algorithms (which I think are almost certainly too dumb to have any idea what a video is about) pushed conspiracy theory content for the simple reason that if you can get someone watching conspiracy theory content on YouTube, they're very likely to continue watching conspiracy theory content on YouTube. Of the people who watch video 1234, 30% of them then watch dramatically more YouTube afterwards, so the more people you can show video 1234 to, the more hours will be watched - think "paperclip maximizer," not "Muahaha, we will drive people down conspiracy rabbit holes!"
Of course, do this long enough, and eventually you have a problem - bad press coverage about how you're driving people down conspiracy rabbit holes. Whoops. But the problem here, from YouTube's perspective, isn't that you're driving people down conspiracy theory rabbit holes - it's the standard tech industry problem that you're getting bad press for it. Bad press is bad for hours viewed. So you fix the problem, and issue the standard tech industry appy polly loggy - "We are so, so sorry you caught us doing this and we will do the work to ensure that you don't catch us doing it in the future."
I don't think YouTube particularly cares about Covid misinformation or [whatever]. They care about the bad press from being seen hosting it, which might impact people's opinion of them and reduce hours watched.
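That "paperclip maximizer" dynamic can be sketched as a toy experiment. Everything here is hypothetical (the catalog, the "rabbit hole" factors, the greedy rule); the point is only that a recommender optimizing observed hours, with no understanding of content, drifts toward whatever keeps people watching:

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical catalog: each topic has a hidden multiplier on how long
# a viewer keeps watching after being recommended it. The recommender
# never sees these numbers, only the resulting session lengths.
CATALOG = {"cooking": 1.0, "music": 1.1, "conspiracy": 1.6}

observed_hours = defaultdict(list)

def simulate_session(topic, base_hours=1.0):
    # Simulated viewer behavior, opaque to the recommender.
    return base_hours * CATALOG[topic] * random.uniform(0.5, 1.5)

def recommend():
    # Greedy policy: try each topic once, then always pick whichever
    # has produced the highest average hours watched so far.
    untried = [t for t in CATALOG if not observed_hours[t]]
    if untried:
        return untried[0]
    return max(observed_hours,
               key=lambda t: sum(observed_hours[t]) / len(observed_hours[t]))

for _ in range(200):
    t = recommend()
    observed_hours[t].append(simulate_session(t))

# The system ends up pushing the rabbit-hole content without ever
# "knowing" what it is about; it only maximized hours watched.
most_pushed = max(observed_hours, key=lambda t: len(observed_hours[t]))
print(most_pushed, {t: len(v) for t, v in observed_hours.items()})
```

No intent, no comprehension of the videos, just an objective; which is exactly why fixing the press coverage and fixing the underlying incentive are two different things.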
To claim that the algorithms understand anything beyond the title or such is to claim a capability that has no evidence for existing.
I don't think this is true really. People aren't going to stop watching YouTube because it pulls others down conspiracy rabbit holes.
I think the real reason is that bad press is unhappy employees and less potential recruits.
You don't have to think that you're wiser or more intelligent than other people to recognize this, and to understand the danger posed by certain kinds of presentations (which includes ads, something I wish was easier to regulate without violating free speech)
That's been the value-add of YouTube since the day they implemented preference-learning and recommendations, and it continues to be one of their distinguishing factors in the marketplace of alternatives.
The law is clear: you can be denied service as long as the reason you are being denied is not your sex, religion, etc. [Protected classes].
If you go to a restaurant and they don't like you they can throw you out, and so can a hardware store, and a movie theater, etc. (Exempting the protected reasons).
Find another service.
This makes sense. Why should a company be forced to allow people on their property? Or use their property? They shouldn't be forced, as long as they're not discriminating.
If people want that, then fine, but they can't then be regulated as if we had a vibrant decentralized market. We need democratic control and certain guaranteed rights.
People have self respect and don’t like being bossed around by nitwits and parasites in mega corporations.
With that in mind, got any other services that would be replacements?
Peertube, off the top of my head, would be the exact tool for the job.
Other services... Well I think you can still view YouTube without an account.
As for publishing, Vimeo?
The other issue here is the person was banned for life from using YouTube, which is a far cry beyond having some of his content removed for not fitting the platform.
With great power comes great responsibility. The banhammer is being wielded too sloppily and that is hurting people in real ways, because parts of the web are now of central importance to daily life. Private companies in other sectors that can similarly impact people's lives are regulated in what they can do to their customers.
That’s curiosity and science, too.
Fortunately, he lost interest after having successfully bred the mold, and never carried the attack out. But you know what, I don’t want kids and sociopaths having easy access to biotech at home. Who knows what hacks will ensue that could devastate crops or economies. The OP says that is a small likelihood. The truth is he has no model that allows rational computation of the risk. The truth is he has faith.
The author of the post understands the practice of science but not the sociology of trust that goes with the community of scientists. I don’t care how safe he declares his ideas to be; it’s not in society's interest to have “Joe’s homemade injectable health serum. You can trust ol’ NASA Joe” available at every roadside stand.
Meanwhile, ranchers are actively breeding antibiotic-resistant bacteria, everywhere, with outstanding success, and sending the resulting strains straight to your local supermarket. Thousands die infected by those strains. Maybe that is a better place to devote attention than a mail-order science kit seller.
We can disagree about where to draw the line. I think it’s obvious that genetic engineering should be strongly regulated. I think that is also obvious to you.
We agree that antibiotics are overused.
Luckily we're already protected from misleading and dangerous medicine with existing law. You're falsely pretending he's doing something worse than he really is. If you're so confident in your belief, why not just use truth instead of exaggeration?
I guess the laws you refer to are helping, because according to the story he’s been investigated for practicing medicine without a license. But why not do everything we can, as a society, to protect ourselves from irresponsible people?
I can see why you do that. It takes so little mental effort to participate in a discussion and still seem to say something.
I think we need reasonable regulatory limits on a lot of things, such as dumping toxic chemicals in a hole in your back yard, or abusing your children. But not everything; and not infinitely. Engineering mutants and “medicines” at home is generally too risky.