Show HN: A decentralized index of banned users and where to find their content (beforetheban.com)
74 points by Mattasher 71 days ago | 100 comments



This is especially relevant considering YouTube killed Terry’s videos. (LoseThos, TempleOS)

He had literally hundreds. For reference, PewDiePie has around 2800, I think.

I believe you can still access them through archive.org. I’ve thought it might be worthwhile creating a new YouTube account for Terry, reupping all his videos, then giving him the keys. But I bet that’d run afoul of YouTube’s ToS.


[flagged]


He is not an "insane person". He is a good programmer who suffers from schizophrenia.


Many insane people do not go on racist tirades.

prions 71 days ago [flagged]

That doesn't excuse his fans/followers encouraging and condoning his racist tirades.


Not wanting his hundreds of programming videos to be whisked down the memory hole isn't condoning or encouraging racist tirades.


So you're suggesting that archive.org should tell a half truth about the man?


How is someone's personal life relevant to their work legacy? We all know what he suffers from; even his official website explains the situation. We do not need to archive every single symptom of it. The man deserves some privacy, especially when it's about a medical condition.


Oh, I was talking about the racism.


Are you proposing we shouldn't be kind to people with mental issues?


I am not, but I'm also not suggesting they need the services of the internet archive.


This is a very timely service, given the current climate in which de-platforming is poised to accelerate.


Anger and righteousness are powerful human emotions, especially toward people deemed 'different' from your in-group.

It basically is a fundamental part of our psyche. It's never going away.


> It basically is a fundamental part of our psyche. It's never going away.

What a piss-poor cop-out argument this is. There are lots of impulsive, animalistic features still baked into our slightly more evolved reptilian brains, and it seems we haven't had much problem ironing out some of the worst of those impulses for net positive outcomes.

Clubbing people from other tribes over the head with tree branches, for example, being one of them.

Why, pray tell, is (or should) this be any different?


Because labelling emotions toxic and unacceptable makes them come out in other ways. Try as they might, the Victorians could not get rid of 'licentiousness'.


People still murder each other fairly commonly. I imagine that's the sense in which similar bad human behaviors should be expected to persist indefinitely.


Sorry. I'm just going by thousands of years of history.


I’m crossing my fingers the hate machine will fizzle itself out. People have to get bored with outrage eventually, right?


I thought this a while ago, but there are some people whose entire online identity revolves around sharing and propagating outrage. Which is to say: even if the things people get outraged about were to diminish, I think people would in turn get outraged about smaller things.


Isn’t that what’s already happened? We went from civil rights movements for blacks (>10%) to gays (<5%) to transgender (<1%).


Just because - in fact especially because - someone is in the numerical minority doesn't make it right to abuse them.


Sure, but it limits the amount of impact your movement can have. If you save black Jewish vegan disabled lesbians from oppression that’s admirable but ultimately only affects 10 people.


>Which is to say: even if the things people get outraged about were to diminish, I think people would in turn get outraged about smaller things.

I think the cases for which that's true don't tend to correlate with the sort of content we're discussing. I don't think "outrage culture" is the reason people object to Alex Jones, as an example.


But crazy people like Alex Jones have always been around (not that I think he's crazy, only evilly making a mint of money taking advantage of the crazy and gullible). Outrage culture, on the other hand, is a fairly new phenomenon associated with social media, and thrives on socially reinforced, group-building feigned outrage. If people who didn't like Alex Jones simply ignored him, he'd sink back into the shadows and have a tiny fraction of his audience. But he's a handy foil for the outrage culture to get their outrage on.


>If people who didn't like Alex Jones simply ignored him, he'd sink back into the shadows and have a tiny fraction of his audience.

He wouldn't necessarily, because the people who don't like him aren't the ones making him popular. And his message has real-world consequences which sometimes can't be simply ignored. Harassing the victims of mass shootings is going to become a thing now because he's convinced people that all of them are staged and the victims are crisis actors.

>But he's a handy foil for the outrage culture to get their outrage on.

We may simply be disagreeing on what constitutes outrage culture. To me, it's outrage for its own sake, outrage as identity politics and virtue signaling ... and there is certainly a lot of that on social media.

But people also have legitimate and understandable reasons to be outraged at Alex Jones and people like him, which to me makes it no longer outrage culture, but just outrage.


"...because the people who don't like him aren't the ones making him popular."

He claims he's received over 5 million new subscriptions to his paid service since he was kicked off multiple social media sites. And even if he's lying about that, the Streisand effect is real.

"But people also have legitimate and understandable reasons..."

This is a guy who says lizard people from Mars are deeply involved in the goings-on of our politics. Being outraged that a (pretend) crazy person says something crazy seems like an incredibly useless and unproductive emotional response.


Someone with an assault rifle showed up at a pizzeria because Alex Jones pushed Pizzagate. Sandy Hook victims are being harassed because Alex Jones convinced people they were just actors. He believes Jews did 9/11 and are secretly false-flagging the alt-right and white supremacist movements.

If the only people he demonized were "lizard people from Mars," then yes, it would be silly to take him seriously, because there aren't any actual lizard people from Mars who could be victimized by that, but that isn't the case.


Alex Jones is outrage culture, surely? His entire shtick was manufactured outrage about the government, "leftists" etc.


His outrage may have been manufactured, but his listeners and detractors take him seriously, so I don't believe the reaction to his content counts as outrage culture.


Are you suggesting that the left doesn't take their outrage seriously?


I... never even mentioned "the left." Or the right.


No, you did not explicitly mention it, but since you're stating that right-wing rhetoric (Alex Jones) "doesn't count" as outrage culture since the fans and critics take him seriously, there seems to be an unspoken implication that "actual" outrage culture (most often attributed to the left) does not similarly fall under the category of being taken seriously.

I didn't want to make assumptions though so I asked.


I'm stating that the reaction to Alex Jones is what doesn't count as outrage culture, specifically because people take him seriously. You could find the same phenomenon at work on the other side of the spectrum, but I'm lazy and CBA to equivocate.

But my thesis is literally that once people take it seriously, it is no longer outrage culture, which is a politically neutral statement. Notwithstanding the implicit politically biased undertones which make "unspoken implications" difficult to avoid.


Ah. Looks like this is simply a misunderstanding. Your reply to pjc50 makes it seem as though you are suggesting that the rhetoric purveyed by Alex Jones does not count as "outrage culture" because it is taken seriously by critics and fans. Perhaps you were even misunderstanding his point in your own reply.


I didn't even think my comments were controversial, but most of them have been voted down at least once, so clearly there is a miscommunication going on.


> Given a diverse enough population, your beliefs are almost certain to already be offensive to some sub group

That is some top notch rationalization of hateful and racist speech and such a reductionist view of the issue. Let’s maybe take a step back and consider the reasons -why- someone may find a belief offensive rather than making some blanket statement about the issue. Let’s also consider that being offended by racism is not the same thing as being offended by someone calling you out for being racist.


The quote doesn't seem even remotely contentious or incendiary, but you've somehow managed to make some sort of value statement about what was said and immediately made the association that the writer is rationalizing hatred for stating a pretty tame quantitative opinion.

You've also described the quote as a 'reductionist view of the issue'. What issue? The decentralized archiving of content that some in a specific group of a certain segment of online denizens have deemed objectionable? Isn't this just proving exactly the point of the quote you're taking issue with? Where is the rationalization?

How did you come to this conclusion? I'm curious what that logic ladder may look like.


In the context of the article and other footnotes, it seems like the implication is, “someone is always going to be offended therefore we can’t label some things legitimately offensive and others not”. It’s a form of linguistic nihilism that comes up a lot during discussions about how “PC” things have become.

The racism comment is an example and not necessarily directly targeted at the article, but more at what’s implied given the context (e.g. the comment about The NY Times article having the race changed from “white” to “black”).


> It’s a form of linguistic nihilism that comes up a lot during discussions about how “PC” things have become.

And so your first response to engage this is to immediately play political-word-association and tie this service up with the worst elements of the "PC" divide, because of how some other groups, not even involved in the discussion right now, deploy bad-faith debate tactics?

That doesn't seem any more helpful than whatever it is you're purporting to have an issue with at the core of a service like this.


You do seem to agree that the severity & background behind offensiveness is complex and subjective. Banning is binary, and so it never maps fully accurately to such complexity, especially when similarly complex/subjective humans and weak AI systems are involved.

Obviously, the list of banned content will contain a complex spectrum of offensiveness, including some really blatant stuff but also some where the banning is very questionable or inconsistent with respect to that which was not banned. It is the latter which is most interesting to preserve and study, but it cannot be mechanically separated from the former, for the same reason as the original problem of whom to ban in the first place.


> You do seem to agree that the severity & background behind offensiveness is complex and subjective. Banning is binary, and so it never maps fully accurately to such complexity, especially when similarly complex/subjective humans and weak AI systems are involved.

I do agree with that. It’s not so much a commentary on my part about banning, but more so about how the service is being presented given that sentence and the context around it.

People shouldn’t be banned for just any reason, but that doesn’t mean banning should never happen because “people will always find something offensive” (which is what seems to be implied by that particular footnote - though I could be wrong). The total relativism about speech and offensiveness is the thing with which I take issue.


Nowhere in any of this is anybody saying "banning shouldn't ever happen". The only point presented is that banning is by nature a judgment call and the goalposts change over time.

The primary implication is that the corpus of banned content includes generally agreeably banned content, arguably banned content, and a record of historical shifts in bannability.


> Nowhere in any of this is anybody saying "banning shouldn't ever happen".

That's fair. I absolutely acknowledge that my emotional response is making me get ahead of myself (I have an axe to grind with linguistic and moral relativism in general) and my apologies for not crafting the most reasonable arguments, here.

And don't get me wrong, I don't think the service is at all a bad idea. I just think that it could have been presented without falling back on the "Someone will always find something offensive and hate your label" comment.


I completely understand. But I would posit that that comment is valid as a technically specific point with intent to put people's emotional responses into perspective.

I think it's obvious that everyone is offensive to some non-zero count of people. Even if that is challenged, there is strong precedent that your current words can be read as ruinously offensive under currently-unknown future social sensitivities, which could retroactively punish you (assuming a continued extrapolation of punishable offensiveness from the current state).

In that, the nature of "being offensive" in and of itself is specifically not a valid distinguisher as it's always true. It's only when the level of offensiveness reaches some agreeable flash point where society acts in agreement that something is "too offensive" (as opposed to simply a binary "offensive"). But since everybody is "offensive" to some non-zero amount as a binary predicate, that word by itself becomes purely selective enforcement with everybody guilty by default.

The statement of "Something is always offensive" is well founded and speaking to a real problem, but yes it is unfortunate that its wording is very similar to defensive, reactionary, relativistic dismissal. In practice, "being offensive" is not actually that which is railed against; it is "agreeably too offensive" (or simply "antagonistic" which is a separate concept) that is the actual pursued distinguisher, while the former phrase is overly broad to a fault and is used in that way to cause real damages.

--

Having said all that, I wholeheartedly agree that all of the above can come across as stuffy, pedantic, and disconnected; and that people obviously tend to use "offensive" to mean "agreeably too offensive". HOWEVER, all of this is critically important when dealing with situations that have high stakes real world implications, especially when specifically codified terms of service, policy, or legislation are involved where the "letter of the law" rules.


Here’s an example. I think it’s not necessarily a good idea to encourage children to take puberty blockers, if they have any kind of irreversible effects. (I don’t know whether they’re reversible or not.)

Is this hate speech? Some would say that it oppresses people who are just trying to raise their children how they see fit. It’s certainly ignorant, since I haven’t done the research to know how to feel. But that’s partly the point.

It’s a tricky issue. But I’ve seen people get banned for less.

A few decades ago, it wasn’t ok to say that gay people shouldn’t be shunned.


"Encourage" is a tremendously loaded word here, since you're making it sound like someone's trying to get them to try heroin, while a more neutral presentation would be "receive medical treatment prescribed by a doctor in response to their psychological distress".

> oppresses people who are just trying to raise their children how they see fit.

I usually see this phrase in the context of people opposed to bans on smacking children...

(I don't think either counts as hate speech as written, BTW, but similar things phrased more vehemently in a different context might.)


It sounds like the sentence you're quoting is the one taking a step back, while you're reasoning from a very limited set of issues.


I'm wondering if the service will eventually take sides. After all they are still serving content that people got banned for, so what happens if the content is actually illegal or is deemed offensive by the service? Will they get banned from it as well? Or would that content be removed?


The goal of the project is to be as decentralized as possible, and to make it as uncensorable as blockchain or bittorrent.
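The key property those systems share is content addressing: an item's identifier is derived from its bytes, so any copy anywhere can be verified against the identifier, and no central host can silently alter or remove the canonical version. A minimal sketch of the idea (illustrative only, not the project's actual implementation):

```python
import hashlib

def content_id(data: bytes) -> str:
    # In content-addressed systems (BitTorrent, IPFS), an item's ID
    # is a hash of its bytes, so identity doesn't depend on any host.
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_id: str) -> bool:
    # Anyone holding a copy can prove it matches the original.
    return content_id(data) == expected_id

archived_post = b"body of a removed post"
cid = content_id(archived_post)
assert verify(archived_post, cid)
assert not verify(b"tampered copy", cid)
```

Mirrors can come and go; as long as one copy survives somewhere, the ID still resolves to exactly the original content.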


"Illegal" is regime-specific. Discussing private property or opposing the dictator/emperor/king/president/pope in some countries has been punishable by death over the years.

"Deemed offensive" by the service gets us back to exactly the same situation as with the major social platforms now, so it's a legitimate concern.


Take sides or reveal bias?

If this gets traction it will be fascinating to see whose claims of systematic oppression are validated.


I expect that if this ever becomes a metric that is looked at, it'll be gamed by "victims" getting themselves banned on purpose, and then crying foul.


Thank you for making this!

I can't help but think the big platforms are shooting themselves in the foot by acting as censors and opening the door up to whole new categories of services (like this) that will ultimately replace them. Betting against decentralization seems like a long term losing strategy. Censoring content and promoting particular political viewpoints will accelerate the move away from centralization.


I tend to agree about the shooting-themselves-in-the-foot part: the more these platforms decide to remove objectionable content, the more it looks like they are endorsing all content that hasn't yet been removed. Banning also encourages more users to try to get opposing viewpoints banned, not just downvoted or ignored.


If this catches on, I could see it becoming a bastion for conservative voices who have found themselves at odds with Twitter and other social media platforms.


These aren't conservative voices, they're hate voices and should be banned. There's lots of legitimate conservative voices out there who are not being listened to. And, they're not banned either.


The rebellious streak in me has been seeking out and reading so-called “banned books”— books that were at one point banned by a community for some reason or another— since I was a teen. Many of these are classics that ran afoul of norms or the party line at some point, somewhere.

The problem isn’t that the voices of hate are being banned. The problem is that the mechanisms used to lower the volume on or silence hate speech can lower the volume on or silence any other form of speech. This is why the US enshrined free speech as the first amendment of the constitution.

Technical means of defying censorship, like this one, help to preserve that right.


Are you under the impression that it's the government running Twitter or any other online forum? I really don't understand why people are having issues understanding that a private enterprise is free to censor as they see fit.


I don't think anyone has a problem understanding that these are private companies and that they are well within their rights to censor whatever they see fit.

What is concerning is that these particular companies are the main platforms used nowadays for conveying ideas and expressing thought, and there are no alternatives that offer the same level of reach. Sure, you can put your video up on another hosting platform. Nobody is going to find it like they would on YouTube.


Then why mention the 1st amendment?

In order to disseminate ideas widely authors have always been at the mercy of editors, publishers and distributors.

It's an impossible situation because if you force someone to publish something they find objectionable then you've stripped them of their own rights just so some arsehole can write shit about Jews/Muslims/Gays/Blacks/insert minority here.


I don't think this is a very charitable reading. Your parent didn't make any claims as to Twitter's legal rights.


> This is why the US enshrined free speech as the first amendment of the constitution.

You're right, it wasn't very charitable, I took it quite literally, and that means the 1st amendment has absolutely no bearing on preserving the right of one person when that is trampling on the publishers rights.

> Technical means of defying censorship, like this one, help to preserve that right.

I did miss the nuance there because I get angry that there appears to be an impression in the general populace that the 1st amendment applies everywhere and gives people carte blanche to say whatever they please without consequence.


I think the issue arises with how powerful these social media platforms are. If your Twitter/FB/etc accounts get shadow banned, you now essentially have no voice on the internet.


Some people (not the one typing this comment) are absolutely fine with that too, it seems.

"This is objectionable to me, therefore, no one should be allowed to consume or be exposed to it."

That's what bothers this particular commenter (me).


I don't use Facebook, and I only log into Twitter once every couple of months or more rarely than that. I don't use any other of the super-popular social networks (I guess I have a linkedin profile that I haven't logged into in 5 or more years.)

My own access habits are in practice similar to one who is banned or suspended regularly.

I have never felt like I essentially have no voice on the internet.

I understand that if some popular person with lots of followers gets banned then it seems like they would risk losing a lot of their audience that they may rely on for income etc etc etc, but that's hardly the reality of the average 10/100/1000-followers user, is it?

Just because the big social networks are really big doesn't mean that they are the only places you can find a way to have a voice on the web. I don't mean to diminish your feelings if you feel that way. If you love twitter or fb or whatever and you get kicked out, then I certainly get that would be hard. But there are other places on the web. Lots of other places. Maybe some alternative places let you have even more of a voice precisely because they're smaller and it takes less to cut through the noise.


They are certainly influential, but so are talk radio and the news media, should those platforms also be forced to broadcast content they disagree with? I don't see any differences between Facebook and any other private businesses that have built media platforms.


The problem is that private enterprise is becoming the centralized controller of both personal and "public square" communications, being a ubiquitously editorializing middle man of common case and important speech.

This is completely different from, and carries higher stakes than, the role that telecoms, hosting companies, and publishers have played in the past.


'Free speech' isn't strictly a legal ideal.


It isn't, but for a good number of people it certainly does appear to be a more comforting approach to stop short of making any sort of ethical claims about the 'free speech' question because it means getting what they want: uncomfortable ideas punted into the closet.


Lead dev on the service here. Please have a look at the page, especially the comments about how unpopular views can change over time, and how the views you hold are certain to seem deplorable to someone out there (hn user nostrademons has a good quote about this which I used).

At core though, for me this is about protecting access to information and the importance of decentralized (uncensorable) identity, not about who private companies should or shouldn't allow on their platforms.


Matt, it sounds like you've rebuilt the graph I already have through Keybase (https://keybase.io/). Can I not just use that instead?


I love the Keybase project (I'm user @mattasher)! You guys are on my list of people to contact as we get closer to public rollout. Anyone at Keybase I should contact specifically?


In footnote 3 you write, 'Candice Owens replaced the word “white” with “black”'. She replaced “white” with “Jewish”.


Good catch! I believe she did both replacements, in different tweets. I've updated the footnote to be more clear.


I can't believe this needs to be stated, but the objection that normal people have to censorship is that we don't want a small handful of corporations and celebrity bullies to be the ones who get to decide what is or isn't "legitimate voice", as you put it.


"normal people"?

This is a discussion that has some serious points on both sides. I don't believe throwing judgments on the people who choose a side that differs from your own will be helpful.


I think they meant "normal people" as in not a corporation or a celebrity


I tend to use regular people just to avoid this confusion :).


IIUC, in U.S. jurisprudence the term "natural person" serves that role.


This is slightly different. It's to differentiate between individual entity and legal entity.

But normal/regular person can be used as opposed to celebrity, or billionaire. Both Jeff Bezos and myself are natural persons. Only one is a normal/regular person :).


Censoring people only makes them yell louder. You drive people deeper into radicalization when you treat them like this.


Maybe this is true, but it seems like this claim needs some evidence.

Shaming people sometimes makes them rebel, and sometimes makes them reform.

Maybe you drive more people into radicalization than self-reflection when you shut them down. I'm not sure I've ever seen evidence of that either way, have you?

However, it seems almost axiomatic that if a person cannot be heard (as easily) then that person cannot recruit (as easily). I would guess that this is at least part of the purpose behind de-platforming tactics. Maybe. Maybe de-platformers just want to stop hearing from their opponents.

I'm wary of de-platforming as a universal tactic though I might be convinced that it is sometimes the right approach. At any rate, if it's effective, then people will keep doing it. And it seems to be pretty effective.


Disclaimer: I don't have empirical evidence for anything I'm about to say.

I think that someone isn't hateful because they're an inherently hateful person. People are a product of their environment. If they only communicate with hateful people, it shapes their worldview.

Everyone has flaws, but somehow hateful speech is that One Thing we cannot tolerate in a person. Bob might be a racist, but he's also a good carpenter, and there's some common ground there. If you just want to get hateful people out of your feed, then ban away. If you want to reduce the number of hateful people in the world, then treat them with respect and gradually show them a better way.


Seems like there is a difference between the racist who has a real, seething resentment but is quiet about it and doesn't act on it, and the person (sincerely held racist beliefs or not) who proselytizes hate, isn't there?

I'm not a good data point, because I have never been a target of that kind of resentment/anger/hatred/etc. At the same time, I can imagine the quiet racist is easier to tolerate than the inflammatory provocateur.

I probably agree with your solution in the long term. I just don't know who is supposed to do the tolerating and way-showing. Certainly I don't think that social media/tech/publishing platforms are required to tolerate viewpoints they think could hurt their business.

For what it's worth, I can tolerate hate speech. I'm not quite a free speech absolutist maybe, but pretty close. But it's easy to tolerate it when I'm rarely if ever a real target of it.

Edit: If you feel I am unfairly calling you out by asking if you have evidence for your previous claim about banning users for content causing radicalization, then I'm sorry. That's not my intent. I really am just curious if that's been studied. I've seen similar claims but not any evidence.


No: people radicalise by contact with other radical people, not by being isolated.


That's a strong statement. I can immediately think of one exception to your rule: the "Unabomber," Ted Kaczynski, who both held radical ideas and was notably isolated in a remote mountain cabin with no electronic communications.


He is clinically insane though, being something closer to a serial killer than a real terrorist.


'Hate voices' -- could you think of a more nebulous term? That surely wouldn't be abused to punish political enemies.


>There's lots of legitimate conservative voices out there who are not being listened to.

Like who?


Tom Fitton of Judicial Watch was shadowbanned on Twitter. You go ahead and find the “hate speech” posts.

Candace Owens, a black conservative woman, was temporarily banned from Twitter for making the same comments the NYT’s new editor made, but with “white” replaced with “Jewish”, and all of a sudden Twitter decided those were racist. She did it to prove a point, and Twitter played right along.

Those are two examples.


https://mediabiasfactcheck.com/judicial-watch/ those guys?

"Judicial Watch has made numerous false and unsubstantiated claims, with a “vast majority” of their lawsuits dismissed"


I said find the hate speech, not whether your biased website approves of them.

JW is one of the only groups out there actually winning lawsuits to disclose information on the 2016 election and FBI mishandling of basically everything.

Sorry, they're real lawyers, in real cases, using actual facts that stand up in a court of law. I can see why your favorite website there doesn't like that.

ALSO... FWIW... My local newspaper is EXTREMELY BIASED, and your site lists them as Neutral.


Your source is itself notorious for not being what it claims to be. I would argue that any website that claims to be a fact checker (whether left or right politically) is automatically less trusted, since their entire raison d'etre is to claim the right to declare whether others are biased or not. The reality is that everyone is biased. Claiming to not be so means starting from a false premise.


Honestly, this talk of Twitter “shadowbanning” strikes me as full on conspiracy theory. “I’m shadowbanned! No one can see my tweets!” gets retweeted 10k times. I don’t even follow these people, and it shows up in my TL.

I think this is performance.


Um... You should probably look at how their "Quality Filter" is shadowbanning without actually removing tweets.

The short version is that unless you already follow that person, you won't see them organically in any feeds. They could tweet #JonathonKoren and you won't see it, because they are effectively hidden unless you already followed them.

What Twitter is doing is absolutely political censorship.

EDIT: There are tools to detect the "Quality Filter", and there is a hidden-cam interview with an engineer from Twitter explaining it. Hardly a conspiracy theory. I don't know why it doesn't seem to be an issue for you, or anything about your anecdote of not seeing it.


This doesn’t sound like a shadow ban at all. It’s a term that actually means something.[0] This is just people whining that they don’t have a high enough quality score.

I’d take this a lot more seriously if this wasn’t being peddled by same folks that complain about “conservative purges” when Twitter bans a bunch of obvious bot accounts.

What makes this a conspiracy theory is the assumption of some grand covert political agenda. That’s really really hard to believe. Twitter is infamous for not enforcing their own TOS when it comes to hate speech or even calls for violence.

While I’ve never worked at Twitter, I’ve seen how fringe groups start spreading rumors of dark political agendas when it comes to these sorts of algorithms. It just doesn’t happen. It’s just spreading false outrage for clicks. It’s bullshit sold to people desperate for validation of their unpopular ideas.

[0] https://en.wikipedia.org/wiki/Shadow_banning


Also, I like the Indiana Jones logo where they couldn't even Do It Right enough to have the correct kind of hat.


Cool. All the Nazis in one handy list


Having read the comments here, I think it would be useful if we could see a realistic sample of the kind of content this would be hosting. My guess is that it would primarily end up being a repository of racial epithets, threats, doxxing, child pornography etc. I think the framing of this as being about opinions that are somewhere in the ballpark of the Overton window is maybe causing some commenters to present themselves more righteously than they might if they were mindful of what most actual banned content looks like.


Is it going to include people driven off the Internet by FOSTA/SESTA, for example, or is it just going to be the collection of alt-right types who are currently getting banned from major services?


If I'm understanding the intent correctly, there would be no authority capable of keeping any of those people off the service.



