
The worst part of the information age is arbitration by unreasonable and impenetrable algorithms rather than humans with the capacity to make a judgement call when the rules clearly don't account for the situation at hand.

A transparent appeals process staffed by humans who can at least deliver a rationale, including what rule you broke, should be required by law. There's irreparable reputational damage associated with an algorithm libelously labeling something "extremist content" that isn't.




I think the bigger issue is that certain companies have near monopolies in their spaces to start with. For plebs like myself, youtube is really the only viable option I have to distribute video media if I hope to build an audience. The fact that you effectively can't mount an alternative to facebook, youtube, etc due to network effects is the larger disease, and this is one of many symptoms.


I think the only solution is a distributed and decentralized web.

Distributed hosting of static content is a sorta-solved problem. But curating, linking and discoverability (which require mutating content) are a lot harder due to the trust anchor problem.


Your suggestion exists and has a name: bitchute. I will paste here what they have for "About" at the end of their main page:

   BitChute is a peer to peer content sharing platform. 
   Our mission is to put people and free speech first. 
   It is free to join, create and upload your own content to share with others.
Feel free to read more about it in their FAQ. I really want to stop using YouTube and use this instead.

I hope they make it.

https://www.bitchute.com/


The big challenge with this is that almost everyone has a "one step too far" when it comes to what type of content we are willing to tolerate, and/or what type of content we may get in trouble for hosting even if unintentionally.

That makes it tricky for solutions that "put people and free speech first" to succeed, because they've basically painted a giant target on themselves, and it easily makes even a lot of people who sympathise in principle worried about the bits and pieces that step over their personal line.

Figuring out a reasonable solution to this, I think, will be essential to get more widespread adoption of platforms like these.


I think the problem is the expectation of people that someone else do the filtering for them. I.e. "I don't want to see this kind of content" leads to "someone else should remove it from all the sites I visit". Which obviously leads to conflicting requirements once you have more than one person and those people disagree on what they want to see and don't want to see.

The only reasonable solution is to host everything, modulo requirements by law, and give users the tools to locally filter out content en masse.
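As a rough sketch of what I mean by local tools (purely illustrative; the Post/FilterRules names are made up), the node keeps everything it sees and the user's own rules decide what actually gets shown:

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        author: str
        tags: set
        text: str

    @dataclass
    class FilterRules:
        blocked_authors: set = field(default_factory=set)
        blocked_tags: set = field(default_factory=set)
        blocked_words: set = field(default_factory=set)

        def allows(self, post):
            if post.author in self.blocked_authors:
                return False
            if post.tags & self.blocked_tags:
                return False
            return not any(w in post.text.lower() for w in self.blocked_words)

    # Everything stays hosted; only the local view is trimmed.
    def visible_feed(posts, rules):
        return [p for p in posts if rules.allows(p)]

    feed = [Post("alice", {"cats"}, "cat video"),
            Post("mallory", {"spam"}, "buy now")]
    print([p.author for p in visible_feed(feed, FilterRules(blocked_tags={"spam"}))])  # ['alice']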

In a decentralized system you also skip the legal requirements, since you cannot enforce multiple incompatible jurisdictions at the platform level; individual users will be responsible for enforcing them on their own nodes, similar to how all you can do when accidentally encountering child porn is to clear your cache.


The problem on these distributed platforms is not filtering what people see, but filtering what people host or allow to transit their network connections.

> In a decentralized system you also skip the legal requirements, since you cannot enforce multiple incompatible jurisdictions at the platform level; individual users will be responsible for enforcing them on their own nodes, similar to how all you can do when accidentally encountering child porn is to clear your cache.

But that's the thing: You don't skip it. You spread it to every user. They both have to deal with whether or not they are willing to host the material and whether or not it is even legal for them.

How many of us sympathise with the idea of running a Tor exit node, for example, but avoid it because we're worried about the consequences?

These platforms will always struggle with this unless they provide ways for people to feel secure that the content that is hosted on their machines is content they don't find too offensive, and/or that the traffic that transits their networks is not content they find too offensive.

Consider e.g. darknet efforts like cjdns, which are basically worthless because their solution to this was to require that people find "neighbours" they can convince to let them connect. That basically opens the door to campaigns to have groups you disapprove of disconnected by harassing their neighbours and their neighbours' neighbours, just the same as you can go to network providers on the "open" internet.


First of all, not all p2p networks operate like Tor. For example bittorrent and ipfs only host content you look at. So hosters could largely self-select the content they replicate.

Secondly, there are several tiers of content. a) stuff that is illegal to host b) stuff that is not illegal but that you find so objectionable that you don't even want to host it c) stuff that you don't like but doesn't bother you too much d) stuff you actually want to look at. I posit that a) and b) are fairly small fractions and the self-selection mechanism of "things that I looked at" will reduce that fraction even further.

And even if you are on a network where you randomly host content you never looked at, encryption can provide you some peace of mind (of the obliviousness kind), because you cannot possibly know, or be expected to know, what content you're hosting. Add onion routing and the person who hosts something can't even be identified. If Viewer A requests something (blinded) through Relay B from Hoster C, then B cannot know what they're forwarding and C cannot know what they're hosting. If neither you nor others can know what flows through or is stored on your node, it would be difficult to mount pressure against anyone to disconnect.
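To make the blinding concrete, here's a toy sketch (my own illustration, using the Python "cryptography" package; the keys, names and wire format are all made up) of how layered encryption gives each party only the knowledge it needs:

    # Viewer A fetches a blob through Relay B from Hoster C.
    # B only learns "forward this to C"; C only stores/serves ciphertext.
    import hashlib
    from cryptography.fernet import Fernet

    # The publisher encrypts the content; only holders of content_key can read it.
    content_key = Fernet.generate_key()
    blob = Fernet(content_key).encrypt(b"the actual video bytes")

    # Hoster C stores only the opaque blob under its content address.
    store = {hashlib.sha256(blob).hexdigest(): blob}

    relay_key = Fernet.generate_key()   # shared between A and B
    host_key = Fernet.generate_key()    # shared between A and C
    wanted = list(store)[0].encode()

    # A wraps the request: the inner layer only C can open, the outer only B.
    inner = Fernet(host_key).encrypt(wanted)
    outer = Fernet(relay_key).encrypt(b"to:C|" + inner)

    # Relay B peels one layer and forwards the still-encrypted request.
    hop, _, fwd = Fernet(relay_key).decrypt(outer).partition(b"|")
    assert hop == b"to:C"

    # Hoster C opens the request and returns the blob it cannot read.
    reply = store[Fernet(host_key).decrypt(fwd).decode()]

    # Only Viewer A, holding content_key, recovers the plaintext.
    print(Fernet(content_key).decrypt(reply))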

For the illegal content, especially in oppressive environments, you could install a Voluntary Compliance(tm) government blocklist on public-facing nodes and still opt to run an internal node in your network that uses encrypted connections to retrieve things hosted in other countries you're not supposed to see.
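A compliance check like that could be as simple as a published hash blocklist consulted only on public-facing nodes; a rough, hypothetical sketch (the names and the example entry are made up):

    import hashlib

    # Published blocklist of content hashes (example entry, made up here).
    GOV_BLOCKLIST = {hashlib.sha256(b"known illegal file").hexdigest()}

    def serve(blob, public_facing=True):
        digest = hashlib.sha256(blob).hexdigest()
        if public_facing and digest in GOV_BLOCKLIST:
            raise PermissionError("refused on this public-facing node")
        return blob   # internal nodes skip the check entirely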

----

Anyway, back to filtering for decentralized content hosting. I think once you have a network it is a matter of managing expectations. You can't make content magically disappear. Platforms like youtube, twitter, facebook etc. have raised the false expectation that you can actually make things go away by appealing to The Authority and it will be forever gone. In reality things continue to exist, they just move into some more remote corners of the net. Once expectations become more aligned with reality again and people know they can only avoid looking at content but not make it non-existent things boil down to being able to filter things out at the local level.


> And even if you are on a network where you randomly host content you never looked at, encryption can provide you some peace of mind ... If neither you nor others can know what flows through or is stored on your node, it would be difficult to mount pressure against anyone to disconnect.

I think you misunderstand the objection. Yes, encryption can mean you cannot be prosecuted for "hosting"/"transmitting" some objectionable stuff, since you can prove that you had no idea (at least that's the theory).

However, some want to be able to "vote with their wallets" (well, "vote with their bandwidth"). They don't want to assist in the transmission of some content; they want that content to be hard to find, slow and unreliable. They have the right to freedom of association and don't want to associate with those groups. Encryption cannot guarantee that I won't help transmit $CONTENT.


> First of all, not all p2p networks operate like Tor. For example bittorrent and ipfs only host content you look at. So hosters could largely self-select the content they replicate.

I'm aware of that, but then you suffer the problem of people wanting deniability.

> Secondly, there are several tiers of content. a) stuff that is illegal to host b) stuff that is not illegal but that you find so objectionable that you don't even want to host it c) stuff that you don't like but doesn't bother you too much d) stuff you actually want to look at. I posit that a) and b) are fairly small fractions and the self-selection mechanism of "things that I looked at" will reduce that fraction even further.

That's true, but those sets pretty much only need to be non-zero for it to threaten people's willingness to use such a network.

Further, unless there is stuff in a), and stuff that falls into b) for other people, that you want to look at, such a network has little value to most of us, even though we might recognise that it is good if such a network exists for the sake of others.

This creates very little incentive for most to actively support such systems unless such systems also deal with content that we are likely to worry about hosting/transmitting.

> For the illegal content, especially in oppressive environments, you could install a Voluntary Compliance(tm) government blocklist on public-facing nodes and still opt to run an internal node in your network that uses encrypted connections to retrieve things hosted in other countries you're not supposed to see.

That's an interesting thought. Turning the tables, and saying "just tell us what to block". That's the type of idea that I think it is necessary to explore. It needs to be extremely trouble-free to run these types of things, because to most people the tangible value of accessing censored content is small, and the value of supporting liberty is too intangible.

> Anyway, back to filtering for decentralized content hosting. I think once you have a network it is a matter of managing expectations. You can't make content magically disappear. Platforms like youtube, twitter, facebook etc. have raised the false expectation that you can actually make things go away by appealing to The Authority and it will be forever gone. In reality things continue to exist, they just move into some more remote corners of the net. Once expectations become more aligned with reality again and people know they can only avoid looking at content but not make it non-existent things boil down to being able to filter things out at the local level.

This, on the other hand, I fear is a generational thing. As in, I think it will take at least a generation or two, probably more. The web has been around for a generation now, and in many respects the expectations have gone the other way - people have increasingly come to be aware of censorship as something possible, and are largely not aware of the extent of the darker corners of the net.

Centralisation and monitoring appear to be of little concern to most regular people. People increasingly opt for renting access to content collections where there is no guarantee content will stay around instead of ensuring they own a copy, and so keep making themselves more vulnerable, because to most censorship is something that happens to other people.

And this both means that most people see little reason to care about a fix to this problem and have an attitude that gives them little reason to be supportive of a decentralised solution that suddenly raises new issues for them.

Note that I strongly believe we need to work on decentralised solutions. But I worry that no such solution will gain much traction unless we deal with the above issues in ways that remove the friction for people of worrying about legality and morality, and that provide very tangible benefits that give them a reason to want it even if they don't perceive a strong need on their own.

E.g. Bittorrent gained the traction it has in two ways: through copyright infringement, and separately by promising a lower cost way of distributing large legitimate content fast enough. We need that kind of thinking for other types of decentralised content: at least one major feature that is morally inoffensive and legal that attracts people who don't care if Facebook tracks them or Youtube bans a video or ten, to build the userbase where sufficient pools of people can form for various types of content to be maintained in a decentralised but "filtered" manner. Not least because a lot of moral concerns disappear when people feel they have a justification for ignoring them ("it's not that bad, and I need X").

I genuinely believe that getting this type of thing to succeed is more about hacking human psychology than about technical solutions.

Maybe it needs a two-pronged attack - e.g. part of the problem is that the net is very much hubs and spokes, so capacity very much favours centralisation. Maybe what we need is to work on hardware/software that makes meshes more practical - at least on a local basis. Even if you explicitly throw overboard "blind" connection sharing, perhaps you could sell people on boxes that share their connections in ways that explicitly allow tracking (so they can reliably pass the blame for abuse) to increase reliability and speed, coupled with content-addressed caching on a neighbourhood basis.

Imagine routers that establish VPNs to endpoints and bond your connection with your neighbours', and establish a local content cache of whitelisted non-offensive sites (to prevent a risk of leaking site preferences in what would likely be tiny little pools).
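For the cache part, a rough sketch (hypothetical names, my own illustration) of a whitelisted, content-addressed neighbourhood cache might look like this:

    import hashlib

    WHITELIST = {"debian.org", "wikipedia.org"}   # example "non-offensive" origins

    class NeighbourhoodCache:
        def __init__(self):
            self._objects = {}

        def put(self, origin, data):
            if origin not in WHITELIST:
                return None                        # never cache other sites
            key = hashlib.sha256(data).hexdigest() # addressed by content, not URL
            self._objects[key] = data
            return key

        def get(self, key):
            return self._objects.get(key)

    cache = NeighbourhoodCache()
    key = cache.put("wikipedia.org", b"<html>an article</html>")
    assert cache.get(key) == b"<html>an article</html>"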

Give people a reason to talk up solutions that flatten the hub/spoke model, and use that as a springboard to start to make decentralisation of the actual hosting more attractive.


> But that's the thing: You don't skip it. You spread it to every user. They both have to deal with whether or not they are willing to host the material and whether or not it is even legal for them.

> How many of us sympathise with the idea of running a Tor exit node, for example, but avoid it because we're worried about the consequences?

Tor isn't the best example, because exits don't cache anything. So mainly, exit operators get complaints. And the exit IPs end up on block lists. Operators don't typically get prosecuted. Maybe they get raided, however, so it's prudent to run exit relays on hosted servers.

Freenet is the better example. The basic design has nodes relaying stuff for other nodes. In an extremely obscure and randomized way. Also, keys are needed to access stored material.

However, nodes see IPs of all their peers. Investigators have used modified clients to identify nodes that handle illegal material. So users get busted. There is "plausible deniability". But it's not so plausible when prosecutors have experts that bullshit juries. So users typically plea bargain. Or, if they use encrypted storage, they get pressed for passwords. Like that guy in Philadelphia.


It doesn't matter if operators get prosecuted or not. What matters is whether people in general see running exit nodes as somewhat risky. Unless there is a reasonable perceived payoff, even a very minor perceived cost will be enough to stunt the growth of such a network severely.

Same goes for freenet and the like.


True. I don't run Tor exits from home.


While I don't disagree with your argument per se (not sure if I quite agree either, though), note that avoiding a decentralised platform because of being "worried about the consequences" is not necessarily the same thing as worrying that "the content (..) is content they don't find too offensive".

The first includes both legal and moral considerations, the second only moral ones.

My consideration of whether to share, part of the time, some slice of my home Internet connection bandwidth as a Tor exit node is almost entirely a legal one (I admit that time/effort may play a role too). I'd consider the moral aspect too, but I wouldn't have to think long to decide that (for me personally) the trade-offs are worth it (I could explain why and how, but I don't want to derail the discussion in that direction).

In fact I'd argue this goes for anyone, in some sense. Even if their underlying reasons align with the legal considerations (and they thus don't run one), it's a moral judgement. (In the worst case, there exist people who equate moral judgement with legality.)


> I.e. "I don't want to see this kind of content" leads to "someone else should remove it from all the sites I visit".

I don’t think that’s always the mindset. Isn’t it reasonable for people to have the mindset of “I want to go to sites that don’t have content that I find objectionable”? This way websites can decide which group of people they want to cater to.


The context of the discussion is large websites acting as platforms. Their users are bound to have conflicting views about what's "objectionable". So when the moderation mechanism is deletion instead of letting users just filter then the website has to preferentially treat one group instead of being a platform for everyone.


But the deeper issue is that some people don't want certain content to even exist, and won't be satisfied with just a filter (even though they can't tell the difference between a filter and deletion).


I don't think Youtube censors extremist content because "I" don't want to see it. It's because "I" don't want anyone else to see it! There's no use me filtering my own videos if my goal is to limit what other people see.


This is at the heart of the issue: should platforms bow down and allow some users to dictate what others are allowed to see? Or should a platform remain neutral toward everything (which means potential backlash)?


> modulo requirements by law

Which law?


German law against Pro-Nazi stuff would be an obvious example. Then child pornography. 'Normal porn' in some cultures...

The thing is, legal requirements do not generally allow you to just clear your cache of the offending content; the company is not allowed to show it.


Warning: it has this channel https://www.bitchute.com/channel/whitepower/ which is full of anti-semitic nazi stuff. It's visible on the front page listed above. You may or may not want to visit that link.

Channels aren't open to everyone, so it looks like they have manually allowed that?


Webtorrent does not meet the distributed requirement since webrtc needs signalling servers.

Plus the discovery component is still hosted on websites subject to the network effect.


I think the only solution is a distributed and decentralized web.

I love the idea but one problem: who pays for it? It's a special case of the co-operative vs corporation problem. Without an individual's starting capital, how do you get off the ground?


Webtorrent is another torrent-based media sharing option.


Agreed. If competition exists, customer service is one of the angles competitors can improve on to try to gain customers. For instance, one of the top reasons I prefer FastMail over Gmail is the real customer service I get.


That's not the worst part at all! Anyone could file a complaint about a mistaken algorithm; the companies are run by people, and they'll have to address their customers' concerns. The worst part is that humans use these algorithms to justify and enforce their own shitty decisions; to say it is "out of their hands" due to some inane, manufactured inconvenience.

>There's irreparable reputational damage associated with an algorithm libelously labeling something "extremist content" that isn't.

The damage is definitely repairable. You admit the algorithm is flawed and you tolerate exceptions to it.


I think they were talking about the content producer's reputation.


This seems like a false dichotomy. There is nothing preventing a human staff from declining to provide an explanation, nor is there any reason a machine learning algorithm couldn't summarize its reasons.

The issue isn't AI vs human, it's transparent vs opaque.


The opacity is by design. It's notable that such behavior by an individual would typically be considered psychologically abusive. It's one reason I talk about power relations frequently; we are in the throes of automating them, and given the impact of technology on other spheres of human activity we should be wary of what sort of social relations we are baking in.

Perhaps the Graph should be public domain. Perhaps too we are heading towards a world where reputation and legal identity are subject to casual destruction but there's no real barrier to starting over, much like when you die in a videogame.


I think the notion that machine learning algorithms are "unreasonable and impenetrable" is seen as a huge PR boon by these companies as it shifts the responsibility away from actual humans. So they try hard to promote it.

The fact is that there is always a human in the loop. Without human supervision these algorithms deliver a small but significant portion of incredibly stupid results. So an actual human has to sit down, analyze these results one by one and decide what to do (in some cases just hardcoding the "correct" answer). The general public must be educated about this stuff so that responsibility is not muddled.


Yep. The usual incarnation of the scapegoat is "policy." It sounds much better to blame a byzantine rulebook (which is the perfect tool for diffusion-of-responsibility) than to reveal that the strategists have decided to throw a subset of customers under the bus. In the case of monopoly, sometimes it's not even a subset.

Incidentally, this also explains why there is zero interest in making rulebooks available, concise, searchable, etc. All of these would improve fairness, but rulebooks are actually an instrument of power, not of fairness, so existing power structures will typically oppose any such changes.


I mean, why not wait till it is deemed illegal by the authorities and due process? How can you take away someone's freedom of speech?

It's not like the AI has an absolute idea of what 'extremist' content is; it's just enforcing someone's idea of what it is. AI is trained on data, and whoever labeled that data is the person (or people) winning here.


> How can you take away someone's freedom of speech?

Nobody is taking away your freedom of speech by deleting your video.

Nobody is mandated to provide you with a vehicle or medium for your speech.


Anyone can deny any service to anyone for any reason?

Are you sure you are sold on right outcome of the baker/LGBT wedding-cake case? How about a pharmacist not telling correct/all options based on their theology?

How about a publicly traded corporation? Do they have a mandate to treat people equally? If they are picking a political viewpoint and removing customers because of it, what makes you think that their hiring practices are fair?

Google has a religion now, it has been baptized in the religion of intolerant left. Google is now theocratic, it will not allow blasphemous talk that challenges its religion.

Google claimed to champion Net Neutrality: don't open the packet, they said to the ISPs. They want to resist opening the TCP/IP packet, but when it comes to the content of videos, they want to play God. TCP/IP packet or video, let the legal system take its course, let the authorities tell you to ban something, don't play God on the platform that is valuable because of the sum number of people on it. YouTube is a social network, its value comes from people participating in it; treat the people equally and be a neutral steward of the platform technology, don't push ideology. Anyway, Google has damaged its image too much now. It will never be seen with the same affection again, at least not by me.


I am in communities where I am fine with content being removed or flagged without a rationale being delivered (HN included). I definitely don't want government intervention everywhere.


These takedowns were done by humans. The videos were flagged by AI for human review.


The videos were flagged by AI for human review

The training parameters of that "AI" were set by a human too. Someone said to it "here are a bunch of videos that I PERSONALLY THINK are to be banned, learn from that".


I think the biggest problem is that, unlike with a judicial decision, the thought process (including any highlighted pros and cons) that accompanies a decision is entirely missing.


This has always existed. We call it bureaucracy.


Bureaucracies are staffed with humans who have the capacity to make a judgement call on how to interpret the rules, and bend their letter to serve their spirit, or the interests of public relations, or just common sense.


Do we know how much YouTube involves humans in the process? I wouldn't be surprised if appeals go to a human who clicks the "yeah, nope" button.


I disagree. Have you ever been to the CA DMV with a photo of your license (that you lost/had stolen)? I have. They told me I needed a copy of my lost license to get a new one, or else the man couldn't validate my identity.

Bureaucratic hell, as defined by Harry Harrison in the Stainless Steel Rat series, is the definition of humans as automata.


This isn't just some AI gone wrong. YouTube has had an agenda for years. At the very least, people with the power to undo these bans are complacent; there is simply no way that YouTube staff are unaware of the gradual crackdowns. Even content producers will mention demonetization occasionally.

Nothing major online happens by accident.


The worst part of the information age is arbitration by unreasonable and impenetrable algorithms rather than humans

The thing is, people only tend to notice it when it affects them personally (either they are the victim of the algorithm, or someone they know/like/support is). The world has long worked on irrational biases, which now are being used as the training data for decision-making systems which are subsequently declared to be "objective" because people believe an algorithm can't be biased. And increasingly, the mark of privilege is having access to a system -- applications, interviews, customer service, even courts -- which will use human judgment instead of an unreviewable algorithm.

For more on the topic I suggest the book Weapons of Math Destruction by Cathy O'Neil.


To be fair, the process needs to be much more than merely transparent. It needs to be independent.

That means that it needs to be done by an entity outside of Google itself, and not in any way associated with or influenced by Google.


Google is a company. You want video hosting to be run by the government?


-The services provided by Google and Facebook have an unassailable majority of market share and are relied on by a huge number of people.

-They enjoy a de facto monopoly and are protected by the extreme cost, risk and time involved in building competing services.

-Finally, they have a potential for abuse (say, with selective censorship or politically biased algorithms) that could essentially curb the Constitutional rights of individuals.

If these points sound familiar it's because they're frequently used when arguing for the nationalization of a private company. Since I think that's (currently) out of reach, I believe regulating Facebook, Google, et al as public utilities to be the next best thing.


Trusting government to regulate Facebook? No way. I lived in China; I have seen how that story plays out.

Some people really do have a naïve trust in government. Free markets are the answer. Who has actually made a legitimate attempt to compete with Google or Facebook? What VCs are investing in Facebook alternatives?

MySpace was unstoppable – until it wasn’t. Yahoo owned search – until it didn’t. Perhaps there ought to be more bold entrepreneurship rather than calls for regulation.

Sounds to me like people are OK with just giving up and giving Facebook the win.

Don’t like Facebook’s dominance? Then challenge it. Don’t cop out and just let the government take control.

History is littered with great companies toppled by better ideas and execution.


No way. I lived in China; I have seen how that story plays out.

It's naive to think that any one form of human organization, be it governmental or corporate, is somehow less corruptible than another. You're right to be on your guard against governmental abuses, but don't take your eye off the other balls in play.

History is littered with great companies toppled by better ideas and execution.

What we've seen lately are instances where one company topples another and proceeds to commit the same abuses, only more effectively and at wider scale. When Facebook replaced MySpace, were its users really that much better off? Which company had fewer rules and enforced fewer content guidelines? When one company dominates the market and locks it up with network effects, what incentive does that leave them to play well with others?


> It's naive to think that any one form of human organization, be it governmental or corporate, is somehow less corruptible than another.

You don't think any organizations have ever been any more corruptible than any other organizations?


Not once they reach a certain size, no. It turns into a pointless exercise in moral relativism. Joe brings up the Soviets, Jane counters with the East India Company. Bob rants about Trump, Betsy pulls up the Wikipedia article on Union Carbide. Sally complains about police brutality, Sam lectures her on the history of the Pinkerton Agency. Hank sticks up for the UAW, Mary criticizes the Teamsters.

None of these organizations should have been trusted implicitly to do the right thing for society at large. The burden of proof rests decisively with those who want us to believe that Google and Facebook are somehow different.


All of these organizations operated at their zeniths during different time periods, under different governments, cultural norms, and a variety of other factors. The problem with the conversation you're describing isn't moral relativism or that everyone in it has equally valid points; it's that none of them seem capable of isolating the elements of those organizations that functioned or dysfunctioned without endorsing or criticizing the organization as a whole, while still presenting a cogent argument.


We could save a lot of money and time by replacing the Supreme Court and Congress with one person each. Do you think there might be any disadvantages?


"Trusting government to regulate Facebook? No way. I lived in China; I have seen how that story plays out."

Are you really claiming that every government that tries to regulate corporations is going to wind up like China?

You know there are lots of governments around the world that regulate corporations, and most aren't anything like China.

"Some people really do have a naïve trust in government. Free markets are the answer."

Some people really do have a naive trust in free markets.


And please explain, how would you regulate them? They have nothing in common with how utilities work, so none of the utility regulatory models would work.

What are you actually proposing?


Google is a company. You want video hosting to be run by the government?

I'm with you. I don't want that, but at some point I expect to lose the argument. Google will cut their own throats with their smug "We investigated our decision and found it to be correct" pronunciations.

The fact is, any sufficiently dominant corporation is indistinguishable from a government. The more a company like Google behaves like a bureaucratically-hidebound public utility, the harder it will be to argue that it shouldn't be regulated like one.


Yes.

An alternative is to classify Google as a common carrier, exempting them from the DMCA but preventing them from censoring or even throttling traffic. However, given their business model is built around sponsorship, it is unclear how to also protect the advertisers' interests. Trying to get government shackles onto Google simply seems too tricky.

It seems much easier to simply run Google with tax dollars and no advertising.


Google and Facebook are bigger than a single country now.


This is a strange way to define "bigger".

Google and Facebook combined have a total market cap (sum of all shares) of around 1.2 trillion dollars, which would represent only a 5% increase in tax revenue for America to simply buy all the shares.

However, the Government doesn't need to turn a profit: Google and Facebook combined spend only around $200 million per year on R&D and operating expenses, which would be a rounding error on the tax budget.


There is very little difference between a company and a government when the first can lobby (read: bribe) the second.

I for one would welcome my files being hosted by the government if we lived in a world where democracy wasn't more utopian than flying pink unicorns; as a citizen I would have a slight chance of being respected and listened to, because I'd be a part of it, albeit a very small one, whilst with a company you have zero chance unless you're a stockholder or work there in some high rank. That's something to keep in mind next time they want to brainwash people about how good the privatization of public property is.

We're slowly but steadily going towards a future where governments will first be owned by corporations, then will cease to exist or be relegated to a purely PR role (think about the royal families in nations still having them). That will likely mark the start of the worst period humanity will ever live in.


Governments are not owned by corporations in any way. Governing bodies have far more budget and powers, such as the use of the military and the police when they see fit. Of course companies try to influence or lobby officials, but in the end governments are not owned by anyone, and the more officials you have the more unlikely it is to bribe them all.


AT&T, Verizon, and Comcast are also companies. But Google and FB campaigned to have them labeled as utilities and regulated accordingly.

Let us simply label Google (search, yt, news) and FB as utilities and regulate them too.


It doesn't have to be the hosting itself that's independent, but it would be an improvement if there was an independent body to which you could appeal.


How about by a non-profit?



