The Crime of Curiosity (piratewires.com)
376 points by prostoalex 29 days ago | 207 comments



I don't understand these social platforms.

Covid stuff and fake news aside, I've reported obvious scam videos maybe 20 times, along with posts containing extreme hatred and threats of violence against a group of people and similar stuff where 99.99999% of us would agree it should be removed without question, and only once did they actually remove it. All the other times I got a message that they reviewed my complaint, that it does not go against their standards/TOS/bla bla, and that I can block the video/post/channel for myself.

And taking into account how much random stuff they ban proactively, it just does not compute in the logic board in my brain.


It's a legitimately hard problem. It's also something that they would rather not deal with and don't make any money from. Plus, it's basically unregulated and platforms mostly don't disclose any stats on how they're doing, so there's no real accountability.

One way to think about it is guilt vs shame. Guilt is where you feel bad because you've violated your own standards. Shame is where you feel bad because somebody of standing has called you out. Platforms generally feel shame but not guilt, so most of their actual improvements in anti-abuse and TOS enforcement come from PR messes and other things that trigger shame. But when the heat is off and it's just you reporting something, they're not going to be particularly bothered.


Guilt, and shame, are both *human emotions*.

"Platforms," feel nothing.


It's just a bunch of people.

But you are right in a sense: it is a large enough group that its momentum will crush any feelings of individuals on the team.


Platforms aren't a bunch of people. They're algorithmic constructs being executed by humans and computers and other constructs. Facebook, Twitter, etc, lack any meaningful human in the loop, but have evolved rules and behaviors that are more or less independent of human judgment.

The only way to get a rational human back in the mix is through legislation or litigation. Given sufficient repetition, eventually those things are abstracted and encoded again, in a cycle seemingly designed to prefer nonhuman control systems.


The rules are still designed by an organization ultimately run by humans, and the outcomes are also measured ultimately by human decision makers.

The technical platform is still ultimately an extension of the organization, which is the entity that ultimately has human goals - mostly profit, and avoiding outcomes that threaten that profit.


People in groups are NOTHING similar to individuals when it comes to morality and/or ethics.


It's a metaphor. But so is "platforms". In some sense neither exists.

What actually goes on is an incredibly complicated set of relationships between thousands of people. We don't have a useful specific vocabulary for that, so we have to resort to metaphor. But I believe the metaphor is a valid one, and it's grounded in the actual emotions of key people making these decisions.


It's the same way ADHD is both under- and over-diagnosed. If you're bad enough at your job, you can both miss the majority of content that is harmful and wind up taking down significantly more false positives than truly harmful material.


A few days ago, I learned that "Facebook has exempted high-profile users from some or all of its rules"[0]. Maybe there's something like that happening on youtube also.

[0]https://news.ycombinator.com/item?id=28512121


I'm sure it does something similar. Big Media is held to utterly different standards on what counts as disinformation on Youtube; when was the last time any official media channel got de-platformed?



That’s not being deplatformed.


"or posts with extreme hatred and threats of violence against a group of people and similar stuff where 99.99999% of us would agree that it should be removed without question"

Probably not worth debating this further, but I would contest that number.

I lean very much towards unrestricted flow of information (and I know, there are way more radical ones around here). Not totally unrestricted, with exemptions like CP, but my default is also to just not look at content I find disturbing. Which means I basically avoid most of those social platforms altogether, as I indeed find most of the bs posted there disturbing.


> I reported maybe 20 times obvious scam videos, or posts with extreme hatred and threats of violence

Back when I had a Facebook account, I just started reporting all ads as sexually explicit. They always came back with "no it isn't" but it took a while. And I stopped seeing ads.


Most things are badly done. Most social media companies are not doing moderation well. Fitting the general pattern of most things are garbage. But some things are not! Find them and treasure them.


All social media companies are 100% evil. They are the "bad guys". They are pro-totalitarian, pro-collectivist. They are what has always been wrong with humanity - those who embrace evil.

If you work for these companies and don't quit for simple ethical and moral reasons, YOU are part of the problem and aiding and abetting evil!


"Democratizing genetic engineering won’t suddenly unleash bioterrorism upon the world."

How sure is he of that, and why? As a comparison, nuclear power has been of great utility for many countries, but I sure would not want to see it "democratized".


Yeah I was with him up to that point.

"None of the kits we sell contain anything dangerous, nor is the average person experimenting with biology inherently dangerous. If you are trying to engineer something hazardous — like say a bat virus — you might have a problem, but the genome search space is large enough that accidentally creating a harmful organism is astronomically improbable. Access to most dangerous materials are also heavily restricted..."

This sounds really sketchy to me. I can't tell if he's lying or if he really can't see any further than his own research program... sometimes a field of research is just actually existentially dangerous for life on earth; you can't handwave it away.

The possibility of biohacking as a hobby is thrilling. Trouble is, the dangers are so extreme they're difficult to even think about. Like, thought experiment: can you design a virus that kills all eukaryotic cells? Can you think of other possibilities equally terrifying? Of course stuff like that is far off, but neither you nor this guy know whether 'far off' means 10, 100, or 1000 years.

We already have examples of substances that are strictly harmful to almost all life.. Dispersal of them (via the democratization of access to Chemistry) is already placing stress on the biosphere.. which again, is such a large and horrible thing it's hard to imagine or reason about.

Another thing: it is still plausible that covid was a lab escape. I think the biohacker community should be incredibly humbled by that, and really think about how they'll be seen in decades to come.. we've seen cycles of technologists becoming disgraced because of the impact of their products; those cycles seem poised to accelerate. Maybe this is a good time to get on the right side of history?


Do you see any comparison to Machine Learning? I get the impression he considers himself like a hobbyist running some models on their home GPU. Without access to server farms, there is an almost zero percent chance a home hobbyist invents an AI capable of displacing large amounts of human workers or setting off nukes on its own. AI is dangerous, but can be explored safely by a small practitioner. Perhaps these biomaterials are more sensitive though.


Live biomaterials are self-replicating. That's one difference between biotech and any other scary tech we're used to dealing with, like explosives, dangerous chemicals or nuclear material. Even ML scales only as fast as you can buy compute and convince companies and governments to put your algorithms to use. Self-replication is another game entirely.
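The scaling asymmetry is easy to make concrete. A toy sketch in Python (all numbers here are invented purely for illustration, not estimates of anything real) comparing capacity you acquire linearly against a population that doubles each generation:

```python
# Toy model of the scaling difference (all numbers are hypothetical):
# capacity acquired linearly vs. a self-replicating population.

def bought_capacity(start, added_per_step, steps):
    """Compute/equipment you acquire: grows by a fixed amount per step."""
    return start + added_per_step * steps

def replicating_population(start, steps):
    """Self-replicating organisms: the population doubles each step."""
    return start * 2 ** steps

# After 30 steps, even generous linear acquisition is dwarfed.
print(bought_capacity(1, 1_000, 30))    # 30001
print(replicating_population(1, 30))    # 1073741824
```

Under these made-up parameters, thirty doublings from a single unit already outpace thirty steps of buying a thousand units at a time by five orders of magnitude, which is the asymmetry the comment is pointing at.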


AI isn't really at risk of escaping the lab and afflicting a population with something. I mean, deep fakes went online and that will afflict people, but that's not the same: the closer parallel is "a technique for biohacking escaped his computer," which is different than "I accidentally/sort-of-not-accidentally made mosquitos extinct in my region."


I think the current AI threat is less "it's too intelligent and takes over", but rather "it's too stupid but we force it to take over anyway".


Or it’s just the second coming of CIH, and a bunch of computers get bricked.


Democratization of chem isn't responsible for the dispersal; the lack of regulation and proper labeling is. This occurs at the manufacturing level, not the level of research chemists.


If it turns out that SARS-COV-2 escaped from a lab, that will absolutely be at the level of research (bio)chemists.

The problem with individual biohacking is that there's not even oversight by colleagues. Maybe I'm overly optimistic, but I'd expect someone working near/with a biochemist who was doing crazy stuff to speak out about it. There's no real equivalent for this if the work is going on in somebody's basement or workshop.


It's hard enough to get any of that stuff to work when you have a fully-stocked lab and a raft of PhDs poking at it from all angles. Some Joe in his garage might get something interesting to happen, but it is a long, long way from there to virulence. The viruses themselves are banging away at this all the time from literally trillions of different angles; how often do they land on a winning ticket?


What's the difference between 0% and 0.0000000000000000000001%?

0% never happens. 0.000...01% can happen on the next pull of the lever.

Never, ever, ever underestimate the danger of small odds and just one...more...pull.


The meaningful comparison is never against zero, but against the next actual number. If Joe is 100x less likely to produce something virulent than this year's genetic lottery, the difference Joe makes is much less than that between this year's lottery and last year's, and thus wholly negligible. Especially since 100x is far more generous than Joe's actual odds.
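That compare-against-the-background reasoning can be sketched numerically. Every figure below is made up purely for illustration (none of these are real risk estimates):

```python
# Back-of-the-envelope sketch; every number here is invented for
# illustration and is NOT a real risk estimate.

natural_trials_per_year = 10**12  # hypothetical viral "attempts" from natural mutation
hit_probability = 10**-15         # hypothetical chance any one trial is dangerous

joe_trials_per_year = 10**4       # a hobbyist's experiments per year
joe_handicap = 100                # per-trial odds assumed 100x worse than nature's

expected_natural = natural_trials_per_year * hit_probability
expected_joe = joe_trials_per_year * hit_probability / joe_handicap

# Joe's contribution is a vanishing fraction of the natural background,
# so the marginal risk he adds is negligible under these assumptions.
print(expected_joe / expected_natural)
```

Under these assumed rates the hobbyist adds roughly one ten-billionth of the risk nature already generates each year; the conclusion is only as good as the made-up inputs, but it illustrates why the comparison against the background, not against zero, is the one that matters.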


I don't know why he did not push this to developing countries; there are so many underfunded professors who need cheaper experiments.


A motivated party can deploy and detonate several bombs in places with a lot of people any day, Covid rules notwithstanding.

It's laughably/terrifyingly easy, the hardest part is building or buying the bombs.

But you don't see it done regularly at all.

Granted, a single nuclear or bio attack would affect way more people.


It is far, far easier (and perfectly safe!) to get there by selling a popular product that poisons millions of people.

All the lead paint and tobacco company executives retired to their yachts, scot free. The fats hydrogenation people responsible for millions of heart failures are walking around loose to this day. Nobody currently responsible for slow, painful deaths via liver damage from sugar poisoning, in the hundreds of thousands each year, is looking over his shoulder.

But people still talk about the Zodiac Killer (who I have no factual reason to believe is Ted Cruz).


The fossil fuel company executives are still walking free, too. As is Ted Cruz (who I have no factual reason to believe is the Zodiac Killer).


I'm not looking forward to the day when I need to choose between privately developed immune system installs for my body. I think this is his vision, even though he doesn't go so far as to say it, or perhaps doesn't recognize it. There is some precedent already though; check out Comma AI, for instance.


but I sure would not want to see it "democratized"

Nuclear power is already "democratized" by design. The basic framework of the non-proliferation treaty is that signatories get to use nuclear power (and receive international assistance developing nuclear power) in exchange for a commitment not to develop nuclear weapons.


Until you can personally sign a non-proliferation agreement and then order processed uranium, nuclear power has not been “democratized” in the sense being talked about here.


Private entities supply nuclear fuel all over the world

https://nuclear.gepower.com/fuel-a-plant

My main point, though, is that nuclear energy is a fairly strange, highly-regulated-at-all-levels beast with its own long, complicated history, so it's not a particularly straightforward parallel to a hypothetical weapon-of-mass-destruction-enabling genetic engineering.


Private entities under strict regulation, yes. Individuals in basements? Not so much. Any such person will be placed in custody in most countries if caught. I remember that happening more than once. I'd provide you a link but would rather not google the subject...


Yeah but requiring school does nothing to prevent abuse. For instance, Kim Jong Un went to university.

Just because people gatekeep knowledge, that doesn't automatically mean only good people will get access to it. It actually means that if bad people infiltrate the gatekeeping organization, then they can actually prevent good people from participating.


The author conflates two things. The right to biohack at home and Youtube's alleged obligation to let him broadcast his material.

The first one is fine, as long as we're not talking about experiments that may be dangerous to the public (a possibility he seems to categorically reject for some reason, even though home-made bioterrorism is a real threat). The second one just doesn't follow at all.

Youtube has no realistic way to tell whether someone producing home-made science on youtube is a phd ex-nasa biohacker who follows best practices or just a complete quack who tries to sell dangerous fake remedies to vulnerable people. In practice the incentive to promote the latter probably far outweighs the former in number given the huge, generic audience on Youtube.

77% of the Youtube audience in the US are 17-25 years old, it's not some niche forum for engineers and the notion that they can read scientific papers and weed out legitimate science created at home from misinformation is absurd. The correct platform for something like this is a separate forum or community where enthusiasts meet with some barrier to entry, not the mass media.

I immediately question the motive of someone who promotes individual science or 'hacking' and seeks the largest mass media audience. I think the motive is much more straight forward. The author has a company that sells genetic engineering kits to people and by banning him from Youtube that impacts him financially. I personally think Biohacking and eccentric science is cool, but the appropriate audience for the kind of things he did, like treat himself with CRISPR or replace his entire microbiome to treat IBS is probably a community more self-selected than HN, not Youtube, which is mainstream television for young adults.


This comment is so antithetical to the "spirit of the internet" and hacking in general that I'm in shock.

> 77% of the Youtube audience in the US are 17-25 years old, it's not some niche forum for engineers and the notion that they can read scientific papers and weed out legitimate science created at home from misinformation is absurd.

Only PhDs or engineers are allowed to be interested? You think only people with CS degrees should learn about programming/hacking? Only people with electric engineering degrees should learn about circuits?

Also 17-25 is mostly adults. Also, wouldn't the people who watch his videos be self-selecting? I'm guessing 99% of the demographic you mentioned wouldn't be searching for his video.

> The author has a company that sells genetic engineering kits to people and by banning him from Youtube that impacts him financially.

Probably. But so what?

> I personally think Biohacking and eccentric science is cool

But people in the 17-25 year old demographics are not allowed to find it cool and enjoy it?

Your argument is that it could be harmful and therefore everyone 17-25 should be barred from it. That's the argument I heard in the 90s to prevent young people from learning to code/hack/etc. I don't understand how the culture of censorship and patronizing paternalism snuck into the "hacking"/tech environment.


> that I'm in shock

You're in shock of the attitude that hackers should stay off mainstream commercial platforms and avoid pandering to a generic audience? You do realize why we're having this conversation on HN and not on Facebook right?

The 2% of the young adults who are genuinely interested in science and tech and DIY maker culture will find him regardless where he is. That's how it always works when people actually care, it's a good filter.

I'm not paternalistic, I'm opposed to grifters and attention-seeking. There has always been a barrier to entry in hacker culture, because getting past it signals that you have some commitment and degree of willingness to learn, which is particularly important when we're talking about gene-editing yourself.


> You're in shock of the attitude that hackers should stay off mainstream commercial platforms and avoid pandering to a generic audience?

I'm in shock because you believe 17-25 year olds should be barred from knowledge.

> You do realize why we're having this conversation on HN and not on Facebook right?

You seem to have a sense of superiority. Is HN better than facebook? I'm not so sure. You aren't better than anyone because you post here. I'm certainly not better than anyone because I post here. Certainly wasn't given any test or provided any credentials to post here.

> The 2% of the young adults who are genuinely interested in science and tech and DIY maker culture will find him regardless where he is.

This argument is even more shocking. What about the percentage of people who might have become interested if they were exposed to it? Do you know how many people developed interest in something because they were introduced to it? There are millions of minorities/women/men who didn't go into computer science because they simply weren't introduced to it.

> That's how it always works when people actually care, it's a good filter.

Because the hacker ethos is to make knowledge as difficult as possible to reach?

> I'm not paternalistic, I'm opposed to grifters and attention-seeking.

Why didn't you say so? Why did you write so much about "17-25 year olds", PhDs/engineers, niche forums, etc.? Grifters and attention-seeking aren't my cup of tea either. But they don't deserve to be banned. Also, is grifting and attention-seeking to under-17s and over-25s okay?

Your logic and thinking is what paternalists in the past did to women and minorities. It's why people prevented women and minorities from reading, writing and gaining knowledge because it would be too harmful to them. Nothing more anti-hacker than that.


Your parent did not say any of the things you say they did.


The thing you're ignoring is that available information shapes people. Would I be a programmer if I didn't know computers existed? Definitely not. It's not like one day people wake up and think "biohacking is cool, I'll go looking for 'underground' platforms that host content about it". People's interests are shaped by the content they are offered, ubiquitous monopoly platforms like YouTube have way more power than you might think if you view them as simple hosting platforms. Discovery is the real deal.


> The 2% of the young adults who are genuinely interested in science and tech and DIY maker culture will find him regardless where he is.

Oh, maybe you're right, if you mean 2% of 2% for those who will manage to find his videos elsewhere.


> The 2% of the young adults who are genuinely interested in science and tech and DIY maker culture will find him regardless where he is.

This isn't true. Finding these kinds of things is nontrivial.

For instance, I hadn't heard of him, and neither had my friend with a PhD in molecular biology, who found this pretty interesting.


> This comment is so antithetical to the "spirit of the internet" and hacking in general that I'm in shock

There is no such thing as the "spirit of the internet" or of hacking in general.

And there never was. It was always a range of attitudes and approaches, from the get-go: with massive assholes thrown into the mix seeking to do damage for the lulz, with elitists, with gatekeepers, and what not.


I didn't get the impression the author alleged YouTube had any obligation to let him broadcast his material.

He even mentioned understanding why YouTube would be wary of his content, and mostly was unhappy that YouTube did not put some of their profits into being able to discriminate between home-made science and the quacks exploiting the vulnerable.

But none of this article made me even think the author believed YouTube had some sort of obligation to host content they did not wish to. He just wasn't happy with the decision they made.


Exactly. I read this as a much broader critique of the scientific community. As YouTube is seen as a mass media company, it's not hard to accept their content decisions. He didn't fight it. I also believe any company that markets itself as part of the tech industry should not be this cagey about its relationship to science. It's an ethical question, not a legal one. Jumping to the legal aspect seems like a big misunderstanding of the piece.

Also funny to question whether the guy is acting on financial incentives to promote his ideas. Of course he does.


I don't think the financial incentives question was so much a sincere question as it was bringing up a concern about other influences on his position.

I think it's a valid one. Humans are really bad at fully separating and understanding our influences, never mind selectively ignoring some of them.

Given the perception of many YouTubers as being more 'pure' ideological advocates, I think it's reasonable to bring up. Now to be fair, most any YouTuber that's likely to be a topic of discussion has almost certainly moved far past the point of actually being that 'pure' ideological advocate.


The article mentions selling kits, so it's pretty obvious that his videos would be at least partially motivated by commercial aspirations. Yes? I called the comment silly as it seemed to be saying "I not only disapprove of this entrepreneur's target market, he is also employing deceptive marketing tactics".


I mean yes, but also no? And I think a lot of it depends on how much you want to take from the overall tone of the comment vs a particular statement in it.

Not everybody is going to read the whole article. Not everybody is going to put 2 and 2 together while they're reading and reevaluate his previous statements with that new context.

I think it's pertinent information when evaluating his statements and coming to a decision about how you feel these things should be handled. I'd say mentioning his financial interest is more akin to a pointed summary.

That said, I don't think this was an attempt at deception, nor particularly deceptive given all the context. But still, it's relevant. Some people would give an explicit disclaimer, others not. I don't think this is clear cut enough to say one way is absolutely correct.


> Given the perception of many YouTubers as being more 'pure' ideological advocates

What do you mean by this? Perhaps I haven't searched deep enough, but to me, YouTube these days consists primarily of:

- Content creators, who are in it absolutely for the money, and whatever channel they run is just an excuse to get people to view ads (including product placement, and ads for creator's Patreon);

- Conspiracy nuts, who may or may not also be in it for the ad money;

- People reposting copyrighted content without having the copyright (i.e. the category that's responsible for YouTube's success in the first place);

- Media companies posting copyrighted content legally to take over the ad revenue stream.

There's some sprinkling of people who genuinely want to talk about their hobbies or ideas, without optimizing it for monetization. And here and there someone uploads some random video to share with friends. But the way I experience YouTube, almost all content creators are either marketers or wannabe marketers.


Right. I wouldn't put things quite so extremely, and I think many YouTubers would honestly disagree with you. Which is why I think the GP's point was relevant in the first place.

But that's where people end up, not where they begin. Usually. At least that's the perception for the kinds of channels you probably find most of the HN folks watching, including the channel run by the man in question.

Somebody starts a channel about something they're passionate about, and are able to bring something to the table in relation to it, which lets them make some really cool videos that most people couldn't do.

Over time, things start moving in the direction you mentioned as it turns from a passion project they expect to go nowhere into a living. However I don't think most viewers exactly see it this way. I think there are a variety of reasons for this, including in particular just how far the medium pushes parasocial relationships. The slow change is also a factor for both the audience and the YouTuber themselves, I think.


> Youtube has no realistic way to tell whether someone producing home-made science on youtube is a phd ex-nasa biohacker who follows best practices or just a complete quack who tries to sell dangerous fake remedies to vulnerable people. In practice the incentive to promote the latter probably far outweighs the former in number given the huge, generic audience on Youtube.

I think that the author would agree with you here- but it’s still worth calling out as something that changes if we move from a centralized content production model like PBS to distributed with moderation like YouTube.

I think that they get the “cause” wrong, attributing too much to the “cult” of mainstream science, when it seems like a logistics problem, but the effect is still that no one is able to do a deep dive on validating content. The best they can do is diagnose how far content seems to be from the mean.


"Youtube has no realistic way to tell whether someone producing home-made science on youtube is a phd ex-nasa biohacker who follows best practices or just a complete quack who tries to sell dangerous fake remedies to vulnerable people."

We shouldn't pretend someone with a significant following is banned or demonetized because The Algorithm Made Me Do It™.

YouTube has the resources to verify whether a popular streamer is a genuine PhD and formerly worked at NASA. Far smaller companies do these kinds of verifications all the time. It just chooses not to, even in the process of banning an account.

Note I'm not arguing those credentials alone should mean very much. I just question, in general, the argument that companies can't "realistically" take certain measures because it wouldn't scale. Most of the time, they don't take them because they're not required to and it wouldn't improve their bottom line.


Many quacks and grifters have PhDs. Youtube would have to actually verify the content.


And can you imagine how hard that would be in cases like this? Imagine having to take a week reading up on bioengineering and experimenting to be able to moderate one video! The stats would look terrible!

That said, I think only a very small proportion of videos would take that amount of verification, most could be dismissed far more easily.


Also, isn't "having a PhD" and "used to work at NASA" the exact same credentialist approach he's arguing against?


Agreed - the lesson here is platforms like YouTube have a lot more control than most assume. An easy to use, censorship resistant, video host seems needed.


> An easy to use, censorship resistant, video host seems needed.

Are you sure? It sounds nice, but I think in practice it would be overrun with quacks selling herbal remedies that claim to make your boobs bigger, just like email. So instead of censorship being built-in, you get censorship tacked on after-the-fact. In the world of search engines, bad content drives out good.

Alternatively, you have a dumb pipe that takes care of hosting video without trying to host things like comments or recommendations, so even if you upload spam nobody actually sees it unless they find it elsewhere. But that’s not a solution to spam. It’s just making it somebody else’s problem.


I'm sure. And yes, we need to solve that problem too. But it's more like a spam folder than a deletion. Like someone else on HN pointed out, "label, not remove"


Locals.com is such a platform and Youtube's censorship (right or wrong) has significantly increased its popularity.


They have precisely as much control as anyone familiar with US law and business practice should expect: total control.

As for censorship, I'm sorry but I don't consider what even a corporation whose operation is on the scale of YT or FB does to be censorship.


It's still censorship even if it's not the government.

User A posts content. User B subscribes. Entity C decides that B is not allowed to read what A has to say.

C is a censor.


Technically yes, but I think this sort of argument is trying to force people to see all censorship as the same.

The type, scope, and implementation of the censorship matters. Hacker News removing spam comments is different than YouTube removing this guy's account, which is different from YouTube removing any video that is critical of Google. And all of those are completely different from Thailand arresting anyone who criticizes the king.

If you insist on saying all of these things are the same and if you are against one of them then you have to be against all of them, then you aren't going to get much support for fighting censorship.


There's the definition of censorship, and then there's people's perception of censorship as something the government does. Beyond that, it's not just that many people can't distinguish the two, but that they've merged aspects of each into the same thing in their head.

Censorship is natural and expected in many, many contexts. Any time a parent punishes their child for saying something they feel is unacceptable, that's censorship. It's just censorship in the hope of teaching their children how to be responsible members of society.

The difference between censorship at the individual or group level and at the government level is that with individuals or groups it's possible to find other people or groups (or create your own) where that speech is not censored. At the government level, where they can control all aspects of expression, that might not be possible, which is why it's more of a problem and why it's set as a fundamental right in some countries.

YouTube does not control all expression through video. There are both other video streaming platforms, and peer delivery networks that can have platform front ends, and social media networks (which YouTube might to some degree also be classified as) that allow video dissemination but are operated differently and have different restrictions.

YouTube is censoring people, just like it has always done since day one, and in new and evolving ways as their policies change. That's expected, and legal, and the same thing every other commercial platform does, and if people have a problem with a specific type of censorship YouTube performs, they should give their attention to a platform that doesn't, but not just because there's censorship at all, because of course there is.


> YouTube does not control all expression through video. There are both other video streaming platforms,

For almost all intents and purposes, the audience is on YouTube only. Use anything else, and you are basically guaranteed to divide your audience by two orders of magnitude. Such is their monopoly on videos that are over 1 minute long.

This makes YouTube deplatforming very close to actual government censorship. The only meaningful difference is the lack of due process.


This is even worse logic than some arguments in this sphere.

The audience is wherever there's a link to click on. Anyone who can watch YT can watch vimeo or peertube or even a self-hosted video. There's no problem with that click - the problem is getting people to click.

You want YT to do more than host videos: you want them to market videos to their users. If YT were a completely passive video host (i.e. when you watched a video, there were no links to other videos at all), saying "the audience is on YT" would be meaningless - people could watch YT videos all day and would never see a link to any video that they didn't learn about via some other mechanism. What you seem to object to is YT excluding material from "the algorithm", which is essentially a marketing process.


> The audience is wherever there's a link to click on.

This is where the audience could be.

I don't know the solution to be honest. But the reality is, if a video is not on YouTube, it will not reach a wide audience. If a popular channel gets removed from YouTube, few people will ever watch it again. In most cases this means short term bankruptcy.

People could click. People could follow. But they don't.


> if a video is not on YouTube, it will not reach a wide audience.

But that's not because of any technological issue with watching videos hosted anywhere else. There are absolutely zero technological obstacles to people watching videos elsewhere.

The reason it doesn't reach a wide audience is because of "the algorithm" (or rather, two algorithms):

1. the one that YT uses to put possible videos for you to watch in front of you while you're watching something else

2. the fact that people tend to search on YT and tend to share YT links rather than links to other video locations (a "human procedural algorithm", if you like)

I don't see how you can equate the power that this "gives" YT with governmental power. There is nothing stopping anyone from doing things outside of YT other than their (generally incorrect) belief that being unable to leverage "the algorithm" is death.

> If a popular channel gets removed from YouTube, few people will ever watch it again.

There's no right to use YT's algorithm or network effects for your own benefit. Does that give them great power? It does, yes. Is that like a government? I don't think that it is.


OK we need a bit of nuance there: I do reckon there are many niches that don’t need to cater to a general audience to begin with. Specialised conferences on InfoQ are a good example: their audience is very specific, and is best reached through aggregators like HN or /r/programming. Another example is online courses, that are not casually watched.

On the other hand, some important topics are aimed at a general audience: news, politics, scientific popularization, infotainment, entertainment… People don’t actively seek out those things, they stumble upon them and select what they like… with the help of the "algorithm".

Whether that’s a good thing is another matter. The way YouTube works is eerily close to doom-scrolling, and I’ve lost a lot of time there. My point is, to even have a chance of reaching a general audience, right now the algorithm and people’s behaviour are such that the only place is YouTube.

Now I’m not saying that everyone deserves to reach a wide audience. For one that’s flat out impossible (10 minutes watched by 100K people adds up to nearly two years of total watching time), and most content is either niche or crap anyway. High-quality content, however, that makes a positive impact on the world (for instance by helping, informing, or entertaining people), does deserve a chance.
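That watch-time figure is easy to sanity-check (assuming, as a toy model, that each of the 100K viewers watches the full 10 minutes):

```python
# Total watch time for one video seen by a large audience.
viewers = 100_000
minutes_each = 10

total_minutes = viewers * minutes_each            # 1,000,000 minutes
total_years = total_minutes / (60 * 24 * 365)     # minutes per year = 525,600

print(f"{total_minutes:,} minutes ≈ {total_years:.1f} years")  # ≈ 1.9 years
```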

Does it deserve any particular way to the top? No, of course not. But I do submit they deserve at least a fair chance of being widely watched. And right now, again, that only chance comes from YouTube.

---

As for what we should do about it, I see two routes. One is regulation. We could officially recognise that YouTube basically holds the only meaningful key to an important kind of public discourse, such that any video they refuse to show is effectively censored. This makes them a public utility, and should be treated as such. I’m not sure what that should entail for the search & suggestion algorithms, but it sure means that taking down a video is an infringement on Free Speech, and so should be approved by a judge.

Yeah, that will never work out. Too many videos to process, not enough judges. So I think we’d much better take another route, if at all possible: find a way to severely reduce YouTube’s market share, and have actual competition between many platforms. That way if someone is kicked out of one platform, they can still use another. Or we could expand self hosting, or multiply the PeerTube instances…

What I absolutely do not want (though unfortunately it looks like we’re headed there), is the kind of regulation where YouTube is mandated to filter videos, in such a way that the only solution is an unsustainable level of automation with lots of false positives, no due process, and no way to appeal (like right now in fact, only it’s official).

---

A better regulatory route would be giving platforms a clear choice. Either behave as a utility, which means utter neutrality: no integrated search or suggestions unless they’re demonstrably neutral, no filtering, and no arbitrary takedowns; on the upside, if they happen to host illegal content, they’re not responsible until a judge tells them to shut it down.

Or, retain the biased searches and suggestions (they do have value), all the filtering they want, arbitrary takedowns and bans… but then they are treated as editors, and become responsible for everything that happens on their turf. If someone manages to publish some illegal content, they are criminally liable and may in extreme cases go to prison.

Either you’re a carrier, or you’re an editor. Under that rule, YouTube would either become a mere host, at which point we wouldn’t even care about their market dominance (though I suspect their market share would drop as they’re deprived of most of their network effects); or they would become an editor, and that’s so unscalable they’d need to shrink like hell to be sustainable. And again, they’d lose market share and we’d get the diversity needed to make sure that being banned from one platform is in practice very different from actual censorship.


1) I think that YT could make a legitimate case that its recommendation algorithm is neutral (assuming that it's not actually biased by backroom payments etc.). That is to say: it doesn't represent any point of view about anything, and simply tries to show a user "more videos related to the one you're watching". The devil, of course, is in the details of what "related" means.

2) I don't agree that not-on-YT means no wide audience. If I have 2M twitter followers, and I post a link to a video on (say) Vimeo with a sufficiently click-baity description, that video is going to get "a wide audience" (or at least, a large one). There are ways of alerting people to a video's existence beyond YT's own algorithm. The original (hah!) meaning of "went viral" didn't mean "YT recommended it to lots of people and they all clicked". It meant "link got shared by lots of people in an ever-expanding tree of contacts". That can still happen.

3) What's hard are the videos that fall in between the cracks. Not "my kid's 3rd birthday party singalong from last week", and not "the latest video from whomever the current k-pop phenom is". Videos that get, say, 200,000 - 2M views.

4) It's unclear how much view counts on YT are impacted by channel subscriptions vs the algorithm right now. Supposing for a moment that they are heavily correlated with subscriptions, YT could drop the algorithm and not see gigantic shifts in view counts. But I have no idea whether that's true, and I can think of lots of reasons why it may not be. A service that only shows/points you at videos from channels you've subscribed to is radically different from the YT of today.
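On point 1, the kind of "neutral" relatedness being described could in principle be nothing more than similarity on metadata. A toy sketch (all tags and video IDs here are invented; real recommenders are vastly more complex and use engagement signals, which is exactly where neutrality gets murky):

```python
# Rank candidate videos purely by tag overlap (Jaccard similarity) with the
# video currently being watched - no editorial weighting at all.
def related(current_tags: set, candidates: dict) -> list:
    """Return candidate video IDs sorted by descending tag similarity."""
    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0
    return sorted(candidates,
                  key=lambda vid: jaccard(current_tags, candidates[vid]),
                  reverse=True)

videos = {
    "v1": {"chemistry", "diy", "safety"},
    "v2": {"music", "live"},
    "v3": {"chemistry", "lab"},
}
print(related({"chemistry", "diy"}, videos))  # ['v1', 'v3', 'v2']
```

Even here, "the devil in the details" shows up immediately: the choice of which metadata counts as "related" is itself a policy decision.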


> I think that YT could make a legitimate case that its recommendation algorithm is neutral

They’d have to remove shadow banning at the very least. I’m also virtually certain that their algorithms are machine-learning based, and as such very difficult to inspect and debug; ML-based algorithms have also proven to be biased in other contexts (like facial recognition).

> I don't agree that not-on-YT means no wide audience.

I… stand corrected, I guess. Also, I’ve just discussed it with my partner, and she pointed out a marginal exodus away from YouTube, which may amplify and chip away at their dominance.

> A service that only shows/points you at videos from channels you've subscribed too is radically different from the YT of today.

It would be indeed. YouTube is significant in the way it merges 3 tools together: hosting, search, and recommendations. They could be separate. (By the way, Google itself tends to merge search and recommendations, the famous "filter bubble".)


Are you a censor if you own the platform?

You can't come into my print shop and print Nazi literature. Could you print it yourself? Sure. So I'm not preventing you from printing it - I'm preventing you from printing it on my equipment.

I.e., no, that's not censorship.


It is censorship, it's just that not all censorship is bad, and people should care more about the details than that the definition fits for a word they have knee-jerk reactions to.


"Censorship" always has a context.

When a government censors, it is (generally) saying "You may not say or print this in our society".

When a parent censors a child, they are generally saying "You may not say this within our family".

When a particular social group censors a member, they are saying "You cannot say this within our social group".

When a social media platform censors someone, they are saying "You may not say this (or anything else) on our platform".

One of these things is not like the others.


The parent and the government ones are similar because the censored individual can’t easily switch government or switch parents.

(And when the social media platform (google) owns not just the video site but also the search engine and the browser - it starts to move closer to the government/parent example.)


I'll grant both points. But ...

We traditionally grant children fewer rights (at least as far as self-determination goes) than adults, so this aspect is really in keeping with broader cultural norms.

And sure, the google.com/YT/chrome empire does move closer to the government example, but how much closer is a matter of some considerable debate. I'd argue not by much, but I know others would disagree.


What I got from the article was that system as a whole is consistently throwing roadblocks in the way of experimental/small-time science. The author focuses on YouTube, but also cites a government investigation as well as deplatforming by PayPal, Square, LinkedIn, Amazon, Facebook, and Patreon. These are private companies, with their own policies, but private companies are part of the system too. So I don't think the concerns are conflated.

Regarding YouTube, I think the author would agree with you:

> The problem is big tech companies making billions of dollars aren’t capable of doing basic analysis of scientific work, or hiring a team that can, which is why the best they’re capable of on the pandemic front, for example, is attaching a link to the CDC website on every post that mentions “Covid” or “vaccine.”

I don't think we need to assign blame to YouTube here, but it's still the reality, and we should consider what it means for access to science.

Maybe we (as a society) should in fact keep experimental science off YouTube. (I don't agree, but I can see an argument.) Even in that case, the decisions made here are disproportionate. The author starts the article by describing how he's banned from even logging in, not just uploading; he ends by saying how he has to worry about being locked out of his email. I agree with the author that we're in dangerous territory by giving companies unilateral control over this process, even if we do it for good reasons.


But yet, if Nile Red whips up something dangerous, that’s ok?

I mean, I see your point, but this kind of enforcement seems random or worse (takedowns are maybe affected by complaints from lobbyists; chemistry maybe has fewer of those).

No disrespect to Nile Red/Blue, that’s a great channel.


>And the whole "believe science" thing? I’m sorry, it’s complete bullshit. Science is not something to be believed or trusted. Trust is antithetical to science. Show me the data, and let me decide for myself.

This, 1000 times. I keep having to tell people this, and it's frustrating because people who parrot "just trust the science" are acting more like religious zealots. "Trust the science" is just another way of saying "just have faith".


Also covered by Reason (whose reporting on Zayner was likewise censored),

https://reason.com/2021/06/16/why-did-youtube-remove-this-re... ("Why Did YouTube Remove This Reason Video?")

edit: Here's a complete transcript of the censored material, post by Eugene Volokh

https://reason.com/volokh/2021/06/17/youtube-removes-march-2...


Everybody back to your own domains and hosting (not fucking Amazon!). If your server provider bans you, then we have a real problem.


Then you rent a T1 line, and not from Comcast.


Or use framatube (torrent based video)


The potential costs of systemically removing correct but controversial/fringe content are extremely detrimental to scientific progress in general. Sure, we remove thousands of obviously false and likely harmful posts, and there are cases to be made for why that may be good, but sometimes hidden within those thousands of posts was something that may have had a tremendously positive effect, but yet could not be separated from the noise.

"The Crime of Curiosity" is a great way to put it, because we're already banned from questioning a lot of areas of science on most major tech platforms. This system seems to be helpful in some areas until it makes some mistakes, in which case the effects are catastrophic.

Remember that within the first year of covid, "masks work" was considered misinformation along with "a vaccine is likely to happen within a year or two", along with "this may be related to a lab leak", along with... Reality is always changing and uncertain and our policies should reflect that we do not have it all figured out, nor will we (collectively) ever. (Edit: as one commenter expressed skepticism of the mask claim, read over a link like https://old.reddit.com/r/AmItheAsshole/comments/fe2oqg/aita_... about how normal people felt about masks in the first few months of covid. It's pretty shocking and I feel like I'm living in an alternate reality just re-reading it and the top responses).

Now that our infrastructure is being expanded and built out with censoring of 'incorrect' information as a top priority, I fear for how bad the mistakes we make in the future may be.


> The potential costs of systemically removing correct but controversial/fringe content are extremely detrimental to scientific progress in general.

I fall pretty strongly on the side of combatting misinformation, but I disagree with outright removal of content. I think it should be flagged as misinformation, but left available. Put another way, I'm a fan of labelling, not censoring.

I want transparency. "Our misinformation bot rated this as a 90% chance of being misinformation because XYZ." I bet the only reason we can't have that is because the ML bots suck so much the tech industry is scared to implement any system that might be open to scrutiny or analysis. It's a bit ironic.
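A minimal sketch of what such a transparent label record could look like, attached to a post instead of replacing it (every name, field, and value here is hypothetical, not any platform's actual API):

```python
# "Label, don't remove": attach a visible, inspectable moderation record
# to a post; the content itself is left untouched.
from dataclasses import dataclass, field

@dataclass
class ModerationLabel:
    score: float                                  # classifier's estimated probability
    reasons: list = field(default_factory=list)   # human-readable evidence
    model_version: str = "unknown"                # so ratings can be audited over time

def label_post(post: dict, score: float, reasons: list, model: str) -> dict:
    """Return the post with a label attached; nothing is deleted or hidden."""
    post["label"] = ModerationLabel(score=score, reasons=reasons, model_version=model)
    return post

post = label_post(
    {"id": 42, "text": "..."},
    score=0.90,
    reasons=["claim contradicts current public-health guidance",
             "source domain has low trust rating"],
    model="misinfo-clf-v3",
)
print(f"{post['label'].score:.0%} chance of misinformation")  # 90% chance of misinformation
```

The point of the structure is the audit trail: a reader (or outside researcher) can see the score, the stated reasons, and which model version produced them.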


Marking is helpful. Removal is not.

Very much like spam: filters are good but not 100% good, so there must be a way to look at what the filter has rejected, and allow the reader to judge.


The mails that make it to your spambox are a paltry few compared to the mass of dropped mail. You'd be very unhappy with your spambox if the filters weren't so judgmental.


I haven't actually set up my mail server to move mail flagged as spam to a separate folder, and I've gotten roughly 4 spam emails in the year or so I've had this address. My mail server also doesn't drop anything unless it fails DMARC.

I've had to deal with Microsoft outright dropping everything from a domain (not on this server), so I do know it happens, I'm just not convinced it's necessary.
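The policy being described here (only drop on DMARC failure, otherwise flag and deliver) can be sketched as a simple decision function; the function name, result strings, and score threshold are all hypothetical, not any real mail server's config:

```python
# "Flag, don't drop" mail delivery policy: the only hard rejection is a
# DMARC failure; everything else is delivered, at most with a spam tag
# (e.g. an X-Spam-Flag header) the recipient can filter on themselves.
def delivery_action(dmarc_result: str, spam_score: float, threshold: float = 5.0) -> str:
    """Return 'reject', 'flag', or 'deliver' for an incoming message."""
    if dmarc_result == "fail":
        return "reject"        # the one case where mail is actually dropped
    if spam_score >= threshold:
        return "flag"          # tagged but still delivered - reader judges
    return "deliver"

print(delivery_action("pass", 7.2))  # flag
print(delivery_action("fail", 0.0))  # reject
print(delivery_action("pass", 1.1))  # deliver
```

This mirrors the spam-filter analogy above: the filter classifies, but the final judgment stays with the reader.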


That may be because your address hasn't made it to the spammers yet. My address has been public for a while and not filtering meant deleting a few mails per hour. Even a Gmail-address I'm not using gets a good amount of spam, probably because other people have tried using it to sign up for accounts.

> I'm just not convinced it's necessary.

Well yes in some special cases dropping mails is not necessary. My mistake to assume your case is one of the common cases.


Yeah, because there’s less email spam now than there used to be.

https://en.wikipedia.org/wiki/Email_spam#Statistics_and_esti...


There is no such thing as a bot which can identify misinformation with 90% accuracy.

Google has some very advanced natural language processing technology. Try this Google search: "What year did Neil Armstrong land on Mars?"


Wow, unless you happen to know that the Sea of Tranquility is on the moon (seas on the moon, no way), then you are well on your way to believing that Armstrong is relaxing on a Martian beach....


Appreciate this perspective here. I'm against censorship, and am surprised how clear your "label not remove" concept is. I like it.


YouTube's ML bots flagged an episode of Michael Osterholm's podcast. If you're unfamiliar, that's a former COVID advisor to President Biden.

https://news.ycombinator.com/item?id=28003635 ("Ask HN: Googler/YTer Able to Help with CIDRAP's Dr Osterholm's YT Strike?")


Yeah the whole "censoring" stuff ratcheted up really fast. Kinda crazy.


[flagged]


The comment was obviously trollish. Please stop creating accounts to break HN's rules with.

"Portfolio company" has nothing to do with how we moderate HN, except that we moderate less in such cases. Explained many times over the years: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....


HN is not overly censoring, but it [0] is very disapproving of overly divisive and flaming content. Also, HN is not one person. And the comment you linked is not even negative at the moment.

[0] In this case, both community and moderators.


Curious to hear more about this. Is censorship "removal" or simply not getting votes? I've seen this concept generally addressed here before as "we don't censor, people just vote this way"


It was flagged by a mod.


flagging is something normal users can do.


and no longer is.


It's scary.

Imagine the US five years from now, with Team Trump in charge of the censoring.


Banning Trump from Twitter... the pendulum will indeed swing the other way. How long do we have?

The clock is ticking on open source, decentralized solutions. Nothing else is relevant.


From what I remember, the Western expert opinion about wearing masks in public changed from "a weird East Asian habit" to "something that could be useful" around March 2020. The debate continued beyond that mostly because the issue had already been politicized. And because people who are used to prosperity are often unable to handle the uncertainty of living through a major change. Too many people prefer experts who don't change their minds over those who do.

I also remember experts expecting effective vaccines by the end of 2020 already in January or February 2020. The key scientific challenge in vaccines was never about developing them but about collecting sufficient evidence that the vaccines are effective enough and safe enough in diverse populations. And beyond that, getting the vaccines approved and distributed to billions of people before the pandemic is over were even bigger challenges.

One thing contrarians often miss is that science is fundamentally about building and changing the consensus. It's not about convincing yourself that you have discovered something new, but about trying to convince yourself that you are wrong. And failing to do that, trying to convince the relevant audience that you have in fact discovered something new. Believing in something is a matter of faith. Convincing a skeptical audience that is nonetheless open to new ideas is science.


> because we're already banned from questioning a lot of areas of science on most major tech platforms

Or, in other words, those platforms have banned science. It doesn't matter what exactly their dogma is; science cannot happen there.

This may not be a problem; we don't need every single platform to support science. But it's not aligned with those platforms' PR.


https://web.archive.org/web/20200731213626/https://old.reddi...

Here's a version with a lot fewer deleted comments. If anyone wants to see what 1984's memory hole looks like, this is it. You know what you remember, but you're told you're insane for remembering.

Thank god for the Internet Archive and other archival sites; the retroactive modification of articles in the 'papers of record' has been particularly egregious in the last 6 years.


> Remember that within the first year of covid, "masks work" was considered misinformation

No, because that never happened. Maybe don’t spread misinformation yourself if you want to make a point.


Yes, he did. From [1]:

> "The typical mask you buy in the drug store is not really effective in keeping out virus, which is small enough to pass through material. It might, however, provide some slight benefit in keep out gross droplets if someone coughs or sneezes on you."

> He added: "I do not recommend that you wear a mask, particularly since you are going to a very low risk location."

> Fauci has previously been criticized for changing his position on masks. Early on in the pandemic, he advised against wearing face coverings, but that advice evolved over time.

[1] https://www.msn.com/en-us/news/us/fauci-said-masks-not-reall...


To the first point, isn't this still true? Unless I missed something, N-95s/respirators are the only way to protect yourself; wearing a cloth/surgical mask is a way to prevent yourself, who may be asymptomatic, from spreading the disease, NOT to protect yourself from someone you're in contact with who has it.

To the second point, if I'm recalling correctly, this was in part due to the fact that we were undergoing a massive supply shortage. I distinctly remember my roommates (healthcare workers) being told to keep their N-95s in a brown paper bag and reuse them.

To the third point, everyone is a critic and everyone is bound to make mistakes in a rapidly evolving situation. Given that front-line folks had to re-use (read: already contaminated and unsafe for use) masks because of the supply issues, I'm not so sure he was wrong in advising the general public to hold off.


>"The Crime of Curiosity" is a great way to put it, because we're already banned from questioning a lot of areas of science on most major tech platforms.

I think the keyword that, in my mind, justifies the censorship here is "on most major tech platforms". Nobody is banning the discussion of these ideas in academic journals or HN or other places where curious people can go to discuss things - it's just making sure unverified, potentially dangerous theories aren't spreading like wildfire amongst the general population who _aren't_ curious and will assume whatever they're reading is absolute truth.


That's such an arrogant, condescending statement. You're assuming that the general population is too stupid to be trusted with unfiltered information. But theories aren't dangerous. Actions are dangerous.


This is total BS. Theories are dangerous. If they weren't, nobody would ever care about them. They are dangerous because they can be inspiring, sometimes inspiring action. You can say you don't want to ban the theory, only the actions, and that's a legitimate position. It reduces you to post-facto behavior only, however (which may be just fine for you). Many societies and people within them over time have not been fine with that. What they've done about it varies, from draconian bans on theories and punishment for those advocating them, to much milder versions such as "you can no longer use the services of a private company to talk about your ideas".

The general population is not equipped with enough information or enough time to meaningfully distinguish between bullshit and non-bullshit. Just yesterday, there was a post about a new long article/paper/publication by Stephen Wolfram, and the comments there made it clear that even among the academic community relevant to what he writes about, there's widespread disagreement about whether he's an egotistical empty vessel or the second coming of Einstein and Newton's lovechild (with a possible bias towards the former).

I'm all for elevating the agency and capabilities of "the general population", but pretending that a random discoverer of a video on self-use of CRISPR is going to be able to sensibly identify what may be of value and what may be dangerous is just incredibly naive.

And note: the problems do not arise with the general member of the public who decides to do their own deep dive after coming across an idea. If that was the universal response to encountering unfamiliar or controversial ideas, we might be in good shape. The problem is that most people have neither the time nor the inclination to do this, and so stuff just ends up floating around in their impressions of the world, unresolved, but perhaps casting doubt on stuff that is almost certainly true (or false).


> The general population is not equipped with enough information or enough time to meaningfully distinguish between bullshit and non-bullshit.

How certain are you that you're not part of that particular subset of the 'general population'?


I'm absolutely certain that I am part of the general population and share this characteristic.

[EDIT: because of the work I've done, there are a handful of areas where I think my BS detection abilities would exceed those of an average person. But that's likely true for the average person too. Overall though, we're all a lot less able to detect BS than we think. ]


A clear majority of people are more ready to reject things that are true than things that are false.


Roughly a third of the population believes in astrology and doesn't believe in global warming or evolution. A good proportion also believes that Democrats drink the blood of children.

Do any of these facts support your point?


I think social media has proven pretty conclusively that giving the general population unfiltered information and enabling everyone to promote whatever they want is in fact a terrible idea


How so? Social media only started being a societal problem after it became heavily filtered.


I believe that is not correct. Social media is fine in basic forms, such as Reddit/HN style forum threads, and Facebook's keeping-in-touch-with-old-friends and arranging events.

Mixing news into social media, sharing stories to Facebook, turned it from a coffee shop into a town square with a million soap boxes.

Bad actors then seized that format and used it to spread propaganda and disinformation.

That's not to say that the basic forms of social media were perfect, far from it. They're just as susceptible to manipulation. There's just something about Facebook's link to (mostly) real identities that elevates it to a hugely effective propaganda platform.


> Mixing news into social media, sharing stories to Facebook, turned it from a coffee shop into a town square with a million soap boxes.

That only happened after Facebook decided to editorialize everybody's main view. People were not even used to sharing news at that point, but it was the only thing that Facebook allowed to spread.

Anyway, I'm not sure about causality. But claiming that fake news spreading through a heavily editorialized medium is proof that people cannot handle selecting their own information is a complete non-sequitur.


"That only happened after Facebook decided to editorialize everybody's main view. People were not even used to sharing news at that point, but it was the only thing that Facebook allowed to spread."

That's a great point, I hadn't considered that. Thank you!


>You're assuming that the general population is too stupid to be trusted with unfiltered information.

I'm saying that bombarding the general population with low-quality, dangerous information from sources that appear authoritative (what the tech giants are suppressing) is a bad idea. People take the information seriously and use it to cause harm to themselves and others.

There is a certain group of people who like to fuck around with conceptual arguments for the fun of it, but the public square is not the place for that kind of play.

>But theories aren't dangerous. Actions are dangerous.

Ideologies absolutely are dangerous. They've started wars, genocides, and cults. When does a theory stop being a theory and become an ideology?


> I think the keyword that, in my mind, justifies the censorship here is "on most major tech platforms". Nobody is banning the discussion of these ideas in academic journals or HN

I was banned on HN after questioning QM and BB theories. 240 downvotes in one day, then a ban.


You can't actually discuss anything on HN that goes against the moderator's petty ideologies (or yc's profit motive) without being censored. Turn on show dead and take a scroll through nearly any topic.

The irony is that many of the people who claim to be free-speech see nothing wrong with this platform blacklisting people who don't share in the groupthink.


I feel this is a dishonest take. I’ve seen the moderators leave up some pretty egregious violations of the site rules because it fostered good discussion, which seems to be at the heart of HN.

However, good discussion ultimately requires respect for one another, and it seems maybe that thinking is not bilateral in your case. I’d encourage you to introspect on why you may be finding resistance wherever you look.


[flagged]


What was said: "I've seen mods allowing blatant rule violations because of good discussion"

What was read: "I've seen mods censoring stuff because of good discussion".

Sometimes you just want to give up.


I've had showdead on for months, mainly to read old Terry A. Davis posts. Generally the people I see are actual trolls.


>Trust is antithetical to science.

Never heard it put this way before, but I think I agree with it.


> A day after YouTube took down my video, I received an email. They banned me for life. This is not only to say I could no longer upload content. I could no longer even login.

We are, sadly, long past the point at which cloud providers, free content hosting, YouTube, etc. have to be considered hostile to anything outside the tech industry's consensus of what's allowed (as interpreted by their algorithms, which they like to pretend are fancy, but which seem only barely smarter than keyword matching, except when they're rather dumber). Of course, due to Scale(TM), you can't actually have any humans in the loop. Unless, of course, you're well enough connected to raise a bit of a stir on a tech news site, at which point a human will (usually) step in, mutter something about a mistake, fix the problem (the actual problem being the bad PR), and go on their way.

If you're posting funny reaction videos to nonsense content, sure, use YouTube. For anything serious, this is no longer a good idea (at least if you're outside the tech industry consensus, whatever that is today).

But if you are even the slightest bit outside the mainstream, you probably shouldn't be using YouTube, or even the various cloud-based hosting services. Your own server, in a local datacenter, perhaps fronted by CloudFront, is closer to the right answer these days.

In the rush to free services, we've handed far, far too much power to a very small set of companies, who are now happy to use that power to turn the internet into only what they want to hear.


Plenty of serious content survives on YouTube.

I'd say posting something at the intersection of fringe and dangerous can get your content canned, and unfortunately for the article author, "How to cook a COVID-19 vaccine in your kitchen" is in that category.


I hardly trust the chicken I cook, I wouldn't trust anything biological I cooked up near me but it'd be nice to know the process in detail.


My question is how can platforms ensure only “good” science is done at the scale they operate at? So many pseudosciences hide behind jargon, how much scientific education is required to delineate between the two at the velocity people want? A company might take two weeks to review and approve a flagged post, but isn’t the damage already done for how these platforms operate?


While I understand this man's position, I also understand his opponents'.

If something has become crystal clear with COVID, it is that there are secondary effects and unintended consequences of biological experimentation that could affect our lives greatly.

Just as there is a relationship between Wuhan's lab, bats, and COVID, there is a relationship between a laboratory that was experimenting with chimpanzees and viruses 30 km from the origin of AIDS.

Playing at home could pose an existential risk to society at large. You may have the best of intentions, like the Wuhan people probably had, but the road to hell is paved with good intentions.


LOL, a virus escaped from the "Vector" BSL4 lab in Novosibirsk, Russia, on 16.09.2019: https://www.youtube.com/watch?v=_w7SAeNcXA8 .

They were not playing at home. They were not amateurs. But, as you can see at 1:04 (note the soldier in the background), first responders just cracked open bio-laboratories and stole equipment after the blast, because they were "checking the building for fire" (поетажная проверка на загорание).


I’m not sure YouTube is the place for this content. Google does not seem to have any intention of providing ways to verify your credentials or providing references/facts. It’s clear YouTube is not the place people should be relying on to share this type of information or really share anything knowledge based.

I would love to see almost a Wikipedia version of YouTube, where videos could be submitted by anyone and facts/references could be linked at any point of the video.


Isaacson's new book The Code Breaker, about CRISPR, has a couple of positive chapters on Zaynor. (Isaacson interviewed Steve Jobs in his final months for the definitive biography.)


I think that’s a typo and it’s meant to be Josiah Zayner, the author of the article here?

Thanks for the connection, that book looks really interesting.


I'm baffled why anyone would believe that bioterrorism isn't a possible threat.

I'm also baffled why anyone would fail to understand that the practice of medicine is protected. And that vaccines require mass testing before certification and public release. Because every genome is different and it's not just about the fact that you rolled the dice on a new treatment and personally got lucky.

And that people who don't have PhDs - and many who do - can easily fuck up something as powerful as gene editing in countless ways, with fatal or life changing consequences for themselves and others.

Given the very poor reliability of software hacking, "biohacking" is insane, by definition.

When you have a population taking horse dewormer to treat Covid, handing out biohacking tools is like taping razor blades to the limbs of people who haven't learned how to walk without falling over just because some of them want to try something a bit different.

This isn't about "censorship" or victimisation, it's about common sense and the precautionary principle.


I guess it depends on whether you think we should let adults do risky things.

Most libertarians believe that we should let people do what they like provided they don't harm others, which includes experimenting on their bodies to change their gut flora or genes.

I say let people have their freedom, even if they use it in risky ways you find stupid, whether that's skydiving or anal sex or changing their genes.


>> monopoly speech platforms are also sloppily banning any form of science that doesn’t come from ‘ordained’ sources.

It's not just that. The real message is that individuals are not able to make decisions or do things for themselves. This is why any medical treatment that doesn't involve surgery or prescriptions is thrown in with homeopathy and general quackery. It's not just companies either, individuals will look at you funny if you do anything not sanctioned by some perceived source of truth.


Nobody has to use YouTube. There are many other video hosting/watching platforms. None of them are especially popular because they get filled with a lot of junk, because they attract the people who can't host their content on YouTube.

In a way, this dynamic is an effective, natural check on moderation. If moderation primarily removes stuff that most people think is junk, then alternatives competing on reduced moderation will be junk. If moderation removes valuable stuff that lots of people want, then other platforms that host it instead will receive more traffic and legitimacy.

It's kind of like the Reddit vs. Voat thing. Reddit isn't great, but Voat ended up a cesspool, because Reddit's biggest exoduses were driven by hate groups.


Nonsense. A monopoly does not conform to the rules a "normal" company does. Rumble does not compete with YouTube, because a generation of uploaders have put essentially every piece of TV, music, movies, conference proceedings, and every other form of media on YouTube.

If you're wondering if something unusual is online, you would always look on YouTube first. And then stay there.


I disagree. You’re confusing network effects with monopoly. YouTube is not a monopoly.


I always think these comments are super interesting because they rarely list what they are talking about, instead railing against opaque system-words like "ordained" or "established", and countering with innocent or positive words like "medical treatment".

This causes me to never be able to tell if I'm reading genuine frustration with the process of science or indeed just general quackery.

Maybe it would be good to list specifically what frustrations you have with what medical treatments and why. At this point I can only be sure it's not homeopathy.

But, you know, of course it's not homeopathy...

...energy stones though, that shit is legit.


My body my choice.

No, you're not qualified to make that choice.


> I'm hesitant to say it worked because vaccines are complicated and we’d need further testing to confirm our results.

That's the line that makes me interested enough to read more and think about what the author is saying. Misinformation doesn't typically come with a disclaimer saying it might be wrong, so that tells me the author isn't being intentionally deceptive or spreading information where they have no clue what's happening.

Reading more, it actually piques my curiosity a bit. It reminds me of board-level electronics repair before I saw Louis Rossmann's YouTube channel. I always thought it was impossible, but then you watch one video of him doing it and realize all the rhetoric about it being difficult, impossible, and dangerous is just that: rhetoric.

I don't know what the answer is in terms of dealing with misinformation, but one thing I believe fairly strongly is that letting large, private institutions decide what's fact and what's fiction is problematic. The "facts" will always align with their business interests.

I'd even say it's risky to let big companies like manufacturers get away with their lies. I KNOW electronic and appliance manufacturers are lying about the complexity of their products because I can watch a YouTube video and fix a lot of what they claim is unfixable or too dangerous to fix.

Considering that, WTF am I supposed to believe when I read an article like this and someone is saying that biotech isn't as complicated as the profit driven institutions claim? There's got to be some truth to it if they behave the same as the electronics industry, right? Or am I sliding into anti-vaxxer territory?

No matter what, I think it would be a great step forward if we stopped accepting deceit and disinformation as being a normal thing for large corporations and institutions to engage in. We have so many provable examples in the electronic repair industry alone that I think it's endemic. Maybe if the big companies didn't lie so much, there would be less distrust and less of an opportunity for people to spread misinformation and propaganda.


I agree that's an important bar, but intentional misinformation purveyors will learn to use that language too if necessary. It's necessary but not sufficient.

Note: I don't think this guy should have been removed from YouTube, and I have no reason to doubt the veracity of his story; it seems plausible and interesting.


Can we start with the fact that not being allowed onto a company's video platform is not a criminal penalty?


What's the best video platform that supports open standards and censorship resistance?

Is self-hosting the only option?


I do sympathize with him, and I think his specific content should be available on YouTube, but I can also see why YouTube would want to ban him. If I'm YouTube, and I don't really understand anything about biohacking, do I really want to take the risk of promoting material about making your own vaccines? Sure, this guy's content is informative and valuable, but it wouldn't be long before you have someone less competent promoting a self-made vaccine that kills someone. If YouTube doesn't feel it has the expertise to distinguish the two, I can see why it would be prudent to just ban them all.


It would be good if YouTube weren't held responsible for the content they host (by both media and regulators). It's not fair to blame the transmitter for the content.


Well, but even beyond regulators and media, if I'm a parent, do I want to let my kid watch videos on a site where they might click over to a how-to on a homemade medicine that'll kill them? And if I'm the guy running YouTube, will I sleep well at night if users of my site die after trying something that they saw on my site? I think even if I were just running YouTube in my basement and had zero media or regulator attention, I'd want to avoid that just for my own conscience.


Would you also feel responsible as a cloud storage provider? An ISP? A phone manufacturer? An electricity provider?


people need to trust science because the average person doesn't have the scientific background necessary, much less the time, to evaluate the accuracy of everything scientific they deal with - especially in the realm of medicine

like it's great you want to make things accessible and help people take charge of more things in life, but I don't think most people want that - and even of those who do say they want that, few will still take the time necessary to actually understand stuff correctly

which leads to the related issue that caused all the regulations and regulators we have today: people falling prey to snake oil salesmen and downright harmful products in the marketplace. just look at the state of "nutritional supplements" today

to me, I think people should just be more aware of the limitations of what they can really learn given the constraints of day to day life and accept that specialization is necessary to making modern society function - thus putting their trust in the appropriate authorities when necessary

but even that is an idealistic goal so idk


> people need to trust science because the average person doesn't have the scientific background necessary, much less the time, to evaluate the accuracy of everything scientific they deal with - especially in the realm of medicine

People need to be given the information they need to make informed decisions on topics that impact them. I don't have the time or inclination to figure out if the levels of heavy metal exposure for welders in the third world are dangerous or not because it doesn't impact me. I have plenty of time to figure out if any chronic disease I get has promising treatments that haven't been approved yet. For the doctor it's a matter of a 9 to 5 job, for me it's a lot more urgent.

I just look at sci-hub and the citation web of any papers I find there.

You're saying that the solution to having an ignorant population is to make them even more ignorant for their own good. That is the type of thinking that got us the dark ages.


> people falling prey to snake oil salesman and downright harmful products in the marketplace.

Just have institutions enforce labelling rather than banning.

If COVID vids had big banners put underneath them saying "disputed"/"unproven"/"speculative"/"known false"/"likely false" I'd be happy (as long as it includes a link to what standards they are using). Similarly, the FDA should be able to enforce accurate labelling stating what level of verification a treatment has had, not be able to ban it.

If people still want to buy the snake oil (maybe they like how it feels on their skin?) let them!


The line between genius and crackpot is often very blurry. It seems most of his problems are from "guilt by association" and the rest are from doing serious work in very unconventional ways.


YouTube is a private company and as such can publish, or not, whoever or whatever they want. They aren't obligated to publish anything. There are video platforms that will, however, so he can use those instead. But I think the complaint is not rooted in YouTube's behaviour so much as in the author's desire to be seen by as many people as possible. The ability to seek attention is much diminished on other platforms.


Why are people still repeating that private company nonsense? Your same argument was used against civil rights back when private companies were allowed to discriminate based on race. It's an appeal to the existing law and implies you believe the law is correct by default - whatever it might be - and needs no justification.


So facebook and twitter should unban Trump?


Yes?


> In general, I think I understand where the criticism is coming from. Every time I post on social media about being deplatformed, banned, or silenced, someone chimes in with their own story about being banned because “big government is trying to suppress the fact that echinacea cures Covid” or whatever. Spoiler, echinacea doesn’t cure Covid, but this is the kind of crazy nonsense my work is compared to. Are you a credential person? Great, I’m a scientist with a PhD from one of the top universities in the world. I’ve worked at NASA. I’ve published a number of papers.

I couldn't care less for this shit. "Yes I understand the necessity of censorship. But not me. I've worked at NASA. I've published 'a number of papers.'" Lol.

> The problem isn’t my thoroughly detailed research, which I would love to have critiqued in good faith. The problem is big tech companies making billions of dollars aren’t capable of doing basic analysis of scientific work, or hiring a team that can, which is why the best they’re capable of on the pandemic front, for example, is attaching a link to the CDC website on every post that mentions “Covid” or “vaccine.”

Indeed. They are not capable of doing basic analysis of every video, or hiring a team that can. Billions of dollars aren't enough to hire a team to analyze every video on youtube.


>I couldn't care less for this shit. "Yes I understand the necessity of censorship. But not me. I've worked at NASA. I've published 'a number of papers.'" Lol.

There are situations where someone who truly has deep merit meets someone who merely believes they do and tries to relate to the former, but is so far disconnected that they have a categorically different experience which immediately disqualifies the latter. The general public would be unable to recognize this, yet it would take significant effort for the former to explain the situation to an extent the public could understand, and at risk to the emotions and ego of the latter. Surely you can relate to this situation to some extent?


Unfortunately, the "echinacea cures Covid" claim probably isn't true, but the CDC and YouTube people seem to have decided that ANY claims about therapeutics must be suppressed. There isn't any objective reason for this; it's purely political side-taking.

As for your research: are you saying there are no forums where your work can be "critiqued in good faith"? Or just that YouTube isn't that forum? Because you're right -- it isn't.


I wonder if the video had prominent "Don't try this at home" warnings?


The whole point of his video and work is to democratize science. He sells kits that let you do genetic science at home; i.e., a disclaimer would not make sense considering his objective.


You can't even ask a question these days without being canceled.

This is probably the most unscientific timeline we are living in now where everything is politicized and questioning the "experts" is considered sacrilege.

Even Nicki Minaj is being harassed for doubting the party line https://www.foxbusiness.com/technology/nicki-minaj-says-shes...


During the American Golden Era people were free to take whatever position they wanted separately for each individual issue. In 2021, if someone takes position X on an issue, one is expected to take position X on all issues, or suffer social consequences for dissent.

> Using a broad set of issues from the American National Election Studies, we identify rapid growth in the correlations between political attitudes from 2004 to 2016. This emergence of issue alignment is most pronounced within the economic and civil rights domains, challenging the notion that current “culture wars” are grounded in moral issues.

https://www.sciencedirect.com/science/article/abs/pii/S00490...


I wonder if there isn't a market for a COVID vaccine that the inevitable variants aren't resistant to.

If you kept it to a small group of customers, it would maintain its value and potentially be worth $$$$ to them.


AFAIK, variants are not "resistant" to vaccines, they are just different, so a vaccine targeting a previous strain will be less effective because it doesn't match perfectly.

There is no better vaccine than the one for the current dominant variants. You can't really target future mutations because you don't know what the future mutations will be. You can imagine restricting the best, most current vaccine to the rich to limit escape mutations, but besides the obvious ethical concerns, the virus will mutate anyway. Better to vaccinate everyone with the best, because the fewer people infected, the fewer chances you give the virus to mutate.


Current vaccines aren't optimized for the currently dominant Covid strains, they're optimized for the Covid strains that were around a year and a half ago.


The problem, IMO, with the current vaccination system is not that lots of people aren't getting their shots with the available vaccines (although that's sad); the problem is how many hoops have to be jumped through to get the vaccines on the market at all. The GOP could have legitimately criticized Big Government on this if they hadn't swallowed all the stupid. Freedom doesn't mean just the freedom to decline a vaccine; it means the freedom to act without the government's permission.


That's certainly the future.

Mass vaccination might have made it better for costs on tptb in the past. But, if that doesn't work because a sufficient amount of the population refuses to cooperate...


> ... monopoly speech platforms are also sloppily banning any form of science that doesn’t come from ‘ordained’ sources.

> ...

> I’m sorry, it’s complete bullshit. Science is not something to be believed or trusted. Trust is antithetical to science. Show me the data, and let me decide for myself.

Cool, agreed, but, these arguments sound identical to those made by anti-vaxxers who believe vaccines cause autism.


Biohacking in the sense of making E. coli grow on a different substrate, like in the ODIN kit, is pretty tame and unobjectionable. Promoting the manufacture and administration of homemade vaccines, though, is human subjects research, or _very_ close to it. The current manner in which society handles human subjects research came about through reaction to specific offenses, the main ones being Nazi experiments, government studies such as the Tuskegee Syphilis Study, and common experimentation by clinicians on their patients. In reaction, definitions for human subjects research and principles to protect human subjects were established in the Nuremberg Code, Helsinki Declaration, and the Belmont Report. These principles exist not only to protect subjects but to avoid even the appearance of wrongdoing.

Promoting experimental (biohacked) treatments and guiding people through administering them is not compatible with the currently accepted ethical framework for protection of human subjects, _especially_ if it's done with intent of generating scientific knowledge.

That, I think, is a large factor in why government and corporate entities do not wish do be seen as supporting, even tacitly, biohacking on humans. If you do biohacking on yourself quietly, no one will really care, but publishing what you learned makes it scientific research, and if it was done to a human it's human subjects research. So yes, the root of the conflict probably is a lack of acceptance by the "official science cult", but this conflict does not seem arbitrary or frivolous. Extension of current approaches to human subjects research to accommodate a public DIY biology / biohacking community may be possible but seems tricky to do without accidentally deregulating the whole field.


> Promoting experimental (biohacked) treatments and guiding people through administering them is not compatible with the currently accepted ethical framework for protection of human subjects, _especially_ if it's done with intent of generating scientific knowledge.

> Extension of current approaches to human subjects research to accommodate a public DIY biology / biohacking community may be possible but seems tricky to do without accidentally deregulating the whole field.

This is incorrect. The ethical framework and laws surrounding human subjects research pertain to institutions, not to individuals. In the US, there is no human subjects research board for unaffiliated scientists/hobbyists that could approve or deny their research based on ethical considerations, as there is for institutions. The Belmont Report that established IRBs was specifically motivated by institutional abuse of the kind you mention, not general quackery. Marketing of quackery falls under the purview of the US FDA. There is no regulator for independent science. If you are independent and including others in your research, you are only bound by general medical ethics.

See https://www.weber.edu/IRB/irb_history.html


> If you are independent and including others in your research, you are only bound by general medical ethics.

The National Research Act and the Belmont Report were created in opposition to this attitude. Henry Beecher led the fight (in the 1960s, preceding the NRA and Belmont Report) to make including others in your research subject to research ethics, not just medical ethics, and to enforce this with regulation. The regulation was focused on institutions because at the time research was done by institutions. Now, post-internet and post-genetics revolution, we're retreading the same ground but with different details. A different approach is therefore warranted, but a free-for-all is just as unacceptable as it was in the early 20th century. I understand that this is unwelcome and is certainly inconvenient, but institutional science succeeded in regulating itself, and biohacking can do the same.


> A different approach is therefore warranted

What is the different approach?

1. Autonomy (informed consent), 2. beneficence, 3. non-maleficence ("do no harm"), 4. justice (distribution of treatment where it is limited) are the principles that bind doctors who are operating on patients. What principles would you expect an independent researcher to uphold beyond what a doctor currently upholds? All other factors considered, as long as these principles are upheld, a doctor is free to try a new surgery for the purpose of exploring its effectiveness, i.e. engage in research. This is not new; private doctors have been testing procedures of varying risk without an IRB since the time of these commissions. Any researcher who violates these principles--where there is a harm to a research subject--is almost certainly liable under criminal fraud and torts laws. It is not a "free-for-all", as you characterize it.


Yes, the fact that current regulations apply only to institutions is the point! When (unregulated) individuals function like scientific institutions, conflict results, because the current system was not designed with this in mind.

The ethical framework does not apply only to institutions. Ethics are for everyone.

Edit: If the prevailing attitude is that doing human subjects research as an individual ("biohacker") means research ethics can be ignored, because current regulations cannot punish you, it makes total sense that Paypal etc. would simply run away from it. Keep in mind that the article frames the biohacking effort as an equal alternative to institutional science. If it is to be seen as a legitimate and equal scientific enterprise, it needs to develop equivalent ethical governance.


At the core of YouTube's censorship is the idea that people aren't intelligent or wise enough to know what kinds of videos they should watch. They don't know what's good for them.

Whatever you may think of that idea, surely the idea that YouTube, which reduces down to a bunch of people sitting behind desks, should get to decide what others watch is questionable. That is, they think a small group of people are wiser and more intelligent than everyone and should get to curate and censor content for the majority at will.


> At the core of YouTube's censorship is the idea that people aren't intelligent or wise enough to know what kinds of videos they should watch. They don't know what's good for them.

I'm not sure I'd agree. I think that's putting far too much faith in them.

YouTube's goal, roughly stated, is "More Hours Watched." This, in whatever form is appropriate, is pretty much the end goal of any social media sharing handwave etc platform. More eyeball-hours to sell ads to.

For a while, YouTube's algorithms (which I think are almost certainly too dumb to have any idea what a video is about) pushed conspiracy theory content for the simple reason that if you can get someone watching conspiracy theory content on YouTube, they're very likely to continue watching conspiracy theory content on YouTube. Of the people who watch video 1234, 30% of them then watch dramatically more YouTube afterwards, so the more people you can show video 1234 to, the more hours will be watched - think "paperclip maximizer," not "Muahaha, we will drive people down conspiracy rabbit holes!"

Of course, do this long enough, and eventually you have a problem - bad press coverage about how you're driving people down conspiracy rabbit holes. Whoops. But the problem here, from YouTube's perspective, isn't that you're driving people down conspiracy theory rabbit holes - it's the standard tech industry problem that you're getting bad press for it. Bad press is bad for hours viewed. So you fix the problem, and issue the standard tech industry appy polly loggy - "We are so, so sorry you caught us doing this and we will do the work to ensure that you don't catch us doing it in the future."

I don't think YouTube particularly cares about Covid misinformation or [whatever]. They care about the bad press from being seen hosting it, which might impact people's opinion of them and reduce hours watched.

To claim that the algorithms understand anything beyond the title or such is to claim a capability that has no evidence for existing.


> Bad press is bad for hours viewed

I don't think this is true really. People aren't going to stop watching YouTube because it pulls others down conspiracy rabbit holes.

I think the real reason is that bad press is unhappy employees and less potential recruits.


It's fairly clear from a century of (a) marketing (b) psychology that people aren't really even remotely close to as intelligent or wise as they like to think they are, and almost certainly do not know what's good for them (let alone actually managing to act upon it). Humans are almost trivially manipulable (ask any advertising agency) thanks to a set of cognitive biases (and even defects) that are easily exploited (intentionally or otherwise) by all kinds of information and presentations.

You don't have to think that you're wiser or more intelligent than other people to recognize this, and to understand the danger posed by certain kinds of presentations (which includes ads, something I wish were easier to regulate without violating free speech).


> surely the idea that YouTube, which reduces down to a bunch of people sitting behind desks, should get to decide what others watch is questionable

That's been the value-add of YouTube since the day they implemented preference-learning and recommendations, and it continues to be one of their distinguishing factors in the marketplace of alternatives.


You don't have a right to service from a private company. Why are people so surprised and or angered by this?

The law is clear: you can be denied service as long as the reason you are being denied is not your sex, religion, etc. [Protected classes].

If you go to a restaurant and they don't like you they can throw you out, and so can a hardware store, and a movie theater, etc. (Exempting the protected reasons).

Find another service.

This makes sense. Why should a company be forced to allow people on their property? Or use their property? They shouldn't be forced, as long as they're not discriminating.


Whenever people make "the new decentralized YouTube" or whatever, everyone cuts them down and says "nobody wants that, they just want an easy centralized service with everything". At some point we have to accept that's what we've gotten in nearly every field: 2-3 monolithic providers that control the market.

If people want that, then fine, but they can't then be regulated as if we had a vibrant decentralized market. We need democratic control and certain guaranteed rights.


> Why are people so surprised and or angered by this?

People have self respect and don’t like being bossed around by nitwits and parasites in mega corporations.


People are surprised and or angered by this because they aren't as clear on the law as you are. It's understandable, and they need help to better understand how it works.

With that in mind, got any other services that would be replacements?


Why do people always ask "What other privately owned service I don't have control over can I use?" instead of asking "How can I make my own platform that I have control over?".

Peertube, off the top of my head, would be the exact tool for the job.


Most interested in open source, decentralized, self hosted. Thread here: https://news.ycombinator.com/item?id=28558599

Come join!


Note that even if there are no replacements, a private business can still refuse to provide you with service. It could be the only restaurant in town, and the next town might be 85 miles away and you might not have a car. Under US law, that's your problem, not the restaurant's.


You're right and I posted in frustration :(

Other services... Well I think you can still view YouTube without an account.

As for publishing, Vimeo?


It happens to me as well, appreciate your thoughtfulness. Vimeo is still the same deal - same weaknesses as YouTube, just less popular.


Those rules were written when you could find another service. Nowadays communication platforms are too centralised to be allowed to remove content at will.

The other issue here is the person was banned for life from using YouTube, which is a far cry beyond having some of his content removed for not fitting the platform.

With great power comes great responsibility. The banhammer is being wielded too sloppily and that is hurting people in real ways, because parts of the web are now of central importance to daily life. Private companies in other sectors that can similarly impact people's lives are regulated in what they can do to their customers.


Wow this guy is ignorant. Not everyone who is intelligent is a good person. In fact, some people just like to see the world burn. It might only be 0.5% of the population, but I totally get why he's investigated all the time. Selling a kit that allows bacteria to grow where they normally don't just seems like an obviously bad idea to me, even if his science ambitions are good and noble. One can learn a lot about chemistry by synthesizing explosives, but selling a diy detonation kit (even for research and learning) still seems wrong. And the potential to accidentally harm others, for example by getting genetically modified organisms on your skin and then touching stuff, seems even higher here.


I spoke with an inventor, author of many books, and semi-famous Silicon Valley millionaire— whom I respect deeply— who told me that as a teenager he toured a Wonder Bread factory and saw how they used high heat to prevent mold or something. He decided to breed a temperature-tolerant mold in his home and then see if he could spread it at the factory on another tour.

That’s curiosity and science, too.

Fortunately, he lost interest after having successfully bred the mold, and never carried the attack out. But you know what, I don’t want kids and sociopaths having easy access to biotech at home. Who knows what hacks will ensue that could devastate crops or economies. The OP says that is a small likelihood. The truth is he has no model that allows rational computation of the risk. The truth is he has faith.

The author of the post understands the practice of science but not the sociology of trust that goes with the community of scientists. I don’t care how safe he declares his ideas to be, it’s not in society’s interest to have “Joe’s homemade injectable health serum. You can trust ol’ NASA Joe” available at every roadside stand.


Anybody (even a teenager) could do that back then, and anybody (even a teenager) could do it today. Yet, they manifestly haven't. Evidently, it is not very easy to go from a fun experiment to global domination, gene recombination or no gene recombination.

Meanwhile, ranchers are actively breeding antibiotic-resistant bacteria, everywhere, with outstanding success, and sending the resulting strains straight to your local supermarket. Thousands die infected by those strains. Maybe that is a better place to devote attention than a mail-order science kit seller.


Apparently you don’t admit to the risk, but your words are not an argument against doing reasonable things to minimize potential damage from morally stunted hackers.

We can disagree about where to draw the line. I think it’s obvious that genetic engineering should be strongly regulated. I think that is also obvious to you.

We agree that antibiotics are overused.


> It’s not in society’s interest to have “Joe’s homemade injectable health serum. You can trust ol’ NASA Joe” available at every roadside stand.

Luckily we're already protected from misleading and dangerous medicine with existing law. You're falsely pretending he's doing something worse than he really is. If you're so confident in your belief, why not just use truth instead of exaggeration?


I am not pretending anything. I have made no representation of fact about him other than to state a principle based on a logical extrapolation of his behavior.

I guess the laws you refer to are helping, because according to the story he’s been investigated for practicing medicine without a license. But why not do everything we can, as a society, to protect ourselves from irresponsible people?


Because it limits the freedom of those "irresponsible" people. We could ban every activity that has the slightest chance of harm but then none of us could do anything. No computer tinkering allowed - might create a virus! No chemistry allowed - might create drugs! No dogs allowed - they might bite someone.


It’s very easy to argue with you, because your main tactic is hyperbole. Let’s see, I can do that trick, too. How about this: Oh you think any limits to freedom are intolerable? Then let’s go back to post-Roman Europe. No enforceable law at all! No such thing as civic responsibility!

I can see why you do that. It takes so little mental effort to participate in a discussion and still seem to say something.

I think we need reasonable regulatory limits on a lot of things, such as dumping toxic chemicals in a hole in your back yard, or abusing your children. But not everything; and not infinitely. Engineering mutants and “medicines” at home is generally too risky.


Then don't make ridiculous statements like "But why not do everything we can, as a society, to protect ourselves from irresponsible people?" Of course you know the answer to that question is "because those protections cause problems themselves". It adds nothing to anybody's understanding and is a dirty trick to fool people without making any effort.



