The DNSSEC master key securing DNS is about to change (techworld.com)
79 points by jonbaer on Aug 29, 2016 | 77 comments



Nothing about DNSSEC should concern you other than the prospect that it might someday actually be deployed.

http://sockpuppet.org/blog/2015/01/15/against-dnssec/


I wonder why moxie's Convergence [0] didn't take off. To me it sounds near-perfect. It could handle multiple layers of authenticity, including DNS.

The project has been inactive on GitHub for 5 years [1]; however, the Namecoin project picked it up [2] as "FreeSpeechMe" and added support for IPv6 and for .bit, .onion, and .b32.i2p domains. That, too, has now been discontinued "because Mozilla removed the relevant API's as of Firefox 33". They "intend to release a replacement soon". According to Jeremy Rand [3], "A replacement is in the works (and is mostly working already)", but the source doesn't seem to be public, or I can't find it.

Are there any downsides of Convergence?

0: https://www.youtube.com/watch?v=Z7Wl2FW2TcA

1: https://github.com/moxie0/Convergence/

2: https://github.com/namecoin/Convergence/

3: https://twitter.com/biolizard89/status/712525859194798080


IIRC, Google didn't want to run it because they'd end up running all the notaries themselves, which they didn't want to do.


So according to this and CiPHPerCoder's comment, the only reason it didn't take off is that it didn't take off?

Edit: found the comment from Adam Langley [0]. They didn't want to add it to Chrome because it wouldn't meet the standards for inclusion in the preferences UI: practically everyone would just rely on the default notaries, so Google would have to guarantee notary uptime, and would thus be forced to run the notaries themselves.

He doesn't really explain why they wouldn't include it as an optional alternative with no notaries pre-configured. He does, however, say that running it as an extension would be fine, but the required APIs weren't available at the time of writing.

0: https://www.imperialviolet.org/2011/09/07/convergence.html


I like it too. The cons just don't exist from my point of view:

One downside was that it didn't work on things like airport access points, but those break regular TLS too.

Another downside was that Google didn't want to integrate it into Chrome because the notaries would have to be always up. Now they run Certificate Transparency logs that have to be always up anyway. So... what's the difference?

The Perspectives Project picked up the idea, and it seems less dead than Convergence: https://perspectives-project.org/


To me, HPKP, surveillance, and, God willing, a better UX for certificates would get us 80% of the way to Convergence.


One big challenge was it didn't account for gated hotspots (hotels, Starbucks, etc) where you need to authenticate to reach the Internet.


> Are there any downsides of Convergence?

Your web browser grinds to a halt because almost no one runs a notary. I tried using it for a while.


If you're going to spread the "Against DNSSEC" FUD, it would be useful to give people the opposing viewpoint:

http://blog.easydns.org/2015/08/06/for-dnssec/

It's also worth pointing out that DNSSEC enables, for example, improvements to OpenPGP key discovery:

https://tools.ietf.org/html/rfc7929


https://news.ycombinator.com/item?id=10019029

See in that (old) thread the author's attempts to salvage those arguments. Zach followed up with me after that thread, by the way, and apologized for the tone of the original post you just relayed to this thread. I told him that was unnecessary, because I have no intention of ever being nice about DNSSEC. :)

Exactly what I'm hoping for, by the way. Sounds great. A PKI, controlled by the US Government, with 1024 bit RSA at the top, somehow connected to my PGP keys.


I don't demand that you be "nice about DNSSEC". :) But if you want to be credible then it might help if you said something more accurate than "1024 bit RSA".

http://blog.easydns.org/2015/08/06/for-dnssec/#weak

"1Kb keys are temporary and the industry is already migrating to 256-bit ECC."

http://blogs.verisign.com/blog/entry/increasing_the_strength...

"With this change, the root zone ZSK size will match the size of the KSK: both will be 2048-bits. One difference, however, is that the ZSK is changed (rolled) four times per year, or approximately every 90 days. The root zone will not be the first to use a 2048-bit ZSK. Approximately 60 top-level domains (TLDs) are known to be using 2048-bit RSA ZSKs already."


That's your comeback? "You're wrong, the RSA-1024 is only temporary, and only used at the root of the tree"?

The prosecution rests.


If you read the "Deployment Schedule" in that second link, you'll see "Sept. 20 [2016]: The first 2048-bit ZSK will be pre-published in the root zone."

Looks like the prosecution might only have a few weeks of rest. :)


It's the second half of 2016 and the best argument you have for DNSSEC is that they've at least scheduled the start of the removal of 1024 bit RSA keys for later in the year.

I'm pretty comfortable with what this thread says about our respective credibility.


That's great, but if you really wanted to clear up any doubt about your credibility, could you provide an estimate for how much it would cost an adversary to crack a 1024 bit RSA key within 90 days? You seem to suggest that's a significant threat that has to be addressed before anyone uses DNSSEC.
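
For concreteness, here is the sort of back-of-envelope estimate that question invites: a rough Python extrapolation from the public RSA-768 factoring record (about 2,000 core-years, reported in 2009) using the heuristic GNFS complexity. Constants and o(1) terms are dropped, so treat it as order-of-magnitude only, not a definitive costing.

  import math

  def gnfs_work(bits):
      """Heuristic GNFS effort L_N[1/3, (64/9)^(1/3)] for an n-bit modulus."""
      ln_n = bits * math.log(2)
      return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                      * math.log(ln_n) ** (2 / 3))

  # RSA-768 was factored in 2009 at a reported cost of ~2,000 core-years.
  ratio = gnfs_work(1024) / gnfs_work(768)
  print(f"1024-bit vs 768-bit work ratio: ~{ratio:,.0f}x")    # roughly 1,200x
  print(f"Extrapolated effort: ~{2000 * ratio:,.0f} core-years")

By this crude measure, RSA-1024 is on the order of a thousand times harder than the largest public factorization, which is exactly the territory where only a well-funded adversary is the relevant threat model.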


There isn't any doubt about tptacek's credibility.

He points out how dnssec puts control in the hands of whatever government controls the domain, and your counter argument is about the strength of keys used to do so.

It makes your argument seem less like an argument on the merits of the issue and more like one about whether or not tptacek is a credible voice in the matter. Further, calling his arguments FUD takes you further down a path that does not increase your own credibility.


The question of whether DNSSEC "puts control in the hands of whatever government controls the domain" was addressed in the first link I sent:

http://blog.easydns.org/2015/08/06/for-dnssec/#government

http://blog.easydns.org/2015/08/06/for-dnssec/#dane

Basically, whereas any CA (controlled by any government) can issue a certificate for any domain, with DNSSEC you at least get to pick which government you trust to secure your domain (or run your own generic TLD and not trust any government).

My counter argument about the strength of the keys was because he was the first to mention "1024 bit RSA". Did you miss that?

Finally, I used the term "FUD" to borrow the language from that EasyDNS article, the second sentence of which is: "Sadly, there is a lot of FUD out there and we wanted to both debunk that FUD and explain why DNSSEC is vital to the security of the internet." Hopefully people are reading both sides of the story here.


The best argument for DNSSEC is that it can be used to enable specific kinds of key discovery (and not any actual security it provides per se). But it's not entirely clear that it's a good idea even for that auxiliary purpose. Distributing PGP keys arguably would work better if you had the SMTP server report PGP keys when available (obviously, verification would have to come via external means). It certainly would be far faster to deploy this sort of "useful" service in an application-level protocol that all interesting clients already speak, as opposed to requiring lookups through another protocol that can be harder to access with standard APIs.
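
For reference, the RFC 7929 discovery being debated is easy to sketch. A minimal owner-name construction in Python (the helper name is mine, not from the RFC):

  import hashlib

  def openpgpkey_owner(email):
      """RFC 7929: hash the local-part with SHA2-256, truncate to 28 octets,
      hex-encode, and place it under the _openpgpkey label of the domain."""
      local, domain = email.rsplit("@", 1)
      digest = hashlib.sha256(local.encode("utf-8")).hexdigest()[:56]
      return f"{digest}._openpgpkey.{domain}."

  print(openpgpkey_owner("hugh@example.com"))
  # The OPENPGPKEY RRset at that name carries the binary key; the DNSSEC
  # signatures over it are what would make the discovery trustworthy.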


> `Should we be worried`

I don't understand why that's in the headline, other than as clickbait. The article itself mentions that the meetings of the DNSSEC key holders are regular and quarterly, and that the hardware involved in the actual key rollover complies with the highest level specified by FIPS 140 (presumably FIPS 140-2).

I don't believe there's reason to be worried; ICANN has historically taken reasonable precautions to ensure the safety of such meetings and exchanges.


> the hardware involved in the actual key rollover complies with the highest level specified by FIPS 140 (presumably FIPS 140-2).

Eww, FIPS-140-2.

FIPS-140 compliance isn't a selling point. It's more of a genetic marker than anything. Most of the time, it signals to hackers, "we value standards compliance over security."

Did you know that ECB mode is FIPS 140-2 approved, but an AEAD mode like OCB (i.e. what you should be using to encrypt) is not?

Yes, that ECB mode. https://blog.filippo.io/the-ecb-penguin/
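
A minimal demonstration of the problem, sketched with the pyca/cryptography package: identical plaintext blocks encrypt to identical ciphertext blocks under ECB, which is exactly what leaks the penguin's outline.

  import os
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  key = os.urandom(16)
  plaintext = b"ATTACK AT DAWN!!" * 2         # two identical 16-byte blocks

  ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
  ct = ecb.update(plaintext) + ecb.finalize()
  print(ct[:16] == ct[16:32])                 # True: the pattern survives

  gcm = Cipher(algorithms.AES(key), modes.GCM(os.urandom(12))).encryptor()
  ct2 = gcm.update(plaintext) + gcm.finalize()
  print(ct2[:16] == ct2[16:32])               # False: the CTR keystream
                                              # differs for every block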


The very word "Ceremony" is a hint to the theatrics and purpose of the DNSSEC root key procedures. It's more about conjuring a sense of authority around the integrity of DNS and creating the impression that DNSSEC provides some kind of transitive trust: if the procedures at the top are this robust, then that security and sense of legitimacy must flow down.

But in reality, lookups to the root DNS zone are very uncommon; resolvers perform them only when occasionally bootstrapping the nameservers for the TLDs.

An attacker using a poisoned root zone to take over your traffic would have to wait between two days and a week for it to work, so it's not very interesting for a direct attack. A mass poisoning is even less useful: there is a relatively small number of TLDs and any false answers would be detected very quickly (the occasional operational mismatch is reported within minutes on DNS operations mailing lists). And if your goal is to block some DNS names, as happened in Turkey a few years ago, it's easier to simply drop the queries and responses for the domains you don't like; signing doesn't provide any help at all. Bottom line: the root zone is not very security sensitive, there aren't interesting attacks to mitigate in the first place, and none that TLS/SSL don't take care of.
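
You can see the size of that window directly; a quick dnspython sketch (assuming a reachable recursive resolver; cached answers will show decayed TTLs):

  import dns.resolver

  for name in (".", "com."):
      rrset = dns.resolver.resolve(name, "NS").rrset
      print(f"{name:5} NS TTL = {rrset.ttl}s (~{rrset.ttl / 86400:.1f} days)")

  # Fresh answers are typically 518400s (6 days) for the root NS set and
  # 172800s (2 days) for a TLD delegation -- the "two days and a week" above.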

Queries against the TLDs, on the other hand, are much more common; but what are the procedures for the .com/.net rollover? Or .ly? Where are the live video feeds for those zone-signing ceremonies? Are there people walking around with Shamir-split copies of those keys hanging from their necks, wearing them as if they were the guardians of the internet?


Perhaps slightly off topic, but I recall that the LibreSSL guys dropped FIPS entirely[1] due to technical and political concerns.

With that in mind, shouldn't meeting those specifications be seen as an antipattern?

[1]: http://marc.info/?l=openbsd-misc&m=139819485423701&w=2

And relevant HN thread: https://news.ycombinator.com/item?id=7634964


> I don't believe there's reason to be worried;

The only thing I see as a potential concern is that many DNS implementations build the root public key into the server, either at installation or at compile time.

The processes for rotating that key will be tested in the wild for the first time. I can only assume that some set of build/installation processes will have an undiscovered bug during this procedure.

Is it a big deal? Not really, I bet it would be fixed within a few hours.
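
For anyone curious what "the root public key" looks like to those implementations: resolvers track trust anchors by a 16-bit key tag (RFC 4034, Appendix B), so a rollover shows up as a new tag in configs and logs. A sketch of that checksum in Python:

  def key_tag(flags, protocol, algorithm, pubkey):
      """RFC 4034 Appendix B key tag over the DNSKEY RDATA."""
      rdata = flags.to_bytes(2, "big") + bytes([protocol, algorithm]) + pubkey
      acc = 0
      for i, byte in enumerate(rdata):
          acc += byte << 8 if i % 2 == 0 else byte
      acc += (acc >> 16) & 0xFFFF
      return acc & 0xFFFF

  # e.g. key_tag(257, 3, 8, base64.b64decode(root_ksk_b64)), where 257 marks
  # a KSK, protocol is always 3, and algorithm 8 is RSA/SHA-256.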


...I think they mean "should we be worried" that DNS will break due to the change. The article didn't really answer that, though; it just outlines the ceremony that takes place.

Does the key change frequently?


No, it doesn't change frequently. The article mentions that this is the first time it's changing.


The keys are stored in a FIPS 140-2 Level 4 HSM.



"The DNSSEC master key securing DNS is about to change. Should we be worried?"

No. Why should we be worried about anything in a protocol which has a deployment status close to zero on the client side?


Exactly. Does anyone know of any standard deployments that actually have the clients verify the DNS key chain?


Edit: I moved the OpenSSL 1.1.0 comment to the DANE thread, see

https://news.ycombinator.com/item?id=12382752


That depends on how you define a client. If you're a Comcast customer, for example, they are performing DNSSEC validation for you transparently. Recursive resolvers on most Linux distributions have DNSSEC validation by default.

One school of thought is to migrate DNSSEC from relying on validating resolvers sitting within the network provider, and moving it closer to the edge, in web browsers and so forth. Not much of that has happened to date.
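
If you want to know which side of that line you're on, the check is simple; a dnspython sketch (the domain names are just examples; dnssec-failed.org is a deliberately broken test zone run by Comcast):

  import dns.flags
  import dns.resolver

  res = dns.resolver.Resolver()           # system resolver; or set .nameservers
  res.use_edns(0, dns.flags.DO, 1232)     # request DNSSEC records

  answer = res.resolve("ietf.org", "A")   # a DNSSEC-signed zone
  print("validated:", bool(answer.response.flags & dns.flags.AD))

  # A validating resolver should also SERVFAIL on the deliberately broken
  # zone dnssec-failed.org; a non-validating one happily resolves it.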


> One school of thought is to migrate DNSSEC from relying on validating resolvers sitting within the network provider, and moving it closer to the edge, in web browsers and so forth. Not much of that has happened to date.

Fedora is actively working on including a system-local DNS resolver by default that will validate DNSSEC-signed zones. The change was originally slated for F24 but was deferred due to a lack of resources; I suspect it should make it in by F26/F27 if all goes well and some of the NAT64 issues that were discovered with Unbound are fixed.


What Fedora plans to do is use a local resolver with a fallback in case DNSSEC doesn't work. With this trick they try to dodge the inevitable deployment problems of DNSSEC. Of course, they also make it completely insecure, so it doesn't make any sense.


The fallback is to be configurable, but breaking the entire UX when a full recursive resolver is unable to function is not desirable. Personally, I would like to see some indication, as part of the GNOME network icon, of whether the resolver is secure or not, but I have a hard time determining what that would look like and how to implement it without giving users false assurances or worrying them when they don't care.

It's a tricky problem to solve, but Rome wasn't built in a day either. I'll probably disable the insecure fallback myself and only turn it on when I connect to my work VPN. I already don't trust public hotspots, so unless I can connect to my OpenVPN server at home, I don't use them.


> If you're a Comcast customer, for example, they are performing DNSSEC validation for you transparently.

Also, if you are using Google's public DNS (8.8.8.8), they are doing DNSSEC validation for you... that is a significant percentage of the Internet today.


So what? You yourself are speaking plain-ol'-DNS over the public Internet, probably using UDP packets, to some data center that happens to have a Google DNS server in it. Why do you trust that link? And if you do: what is DNSSEC doing for you? Google's DNS servers are the most heavily cached and carefully monitored on the Internet.

Here we see the single most batshit aspect of DNSSEC: validation is done on the servers and not on the endpoints (you "can" do endpoint validation, in the same sense as you "can" run your own recursive caching server locally).

This is a relic of the early-to-mid 1990s, when the core DNSSEC design was finalized and any kind of encryption was thought to be too expensive to deploy to end systems. It's also the reason DNSSEC is signing-only, with no privacy protections. The people who made those decisions were wrong. It was in fact possible to predict that the cycles-per-byte cost of encryption would be driven down to a rounding error in just a dozen or so years.

But we're still forced to live with that bad decision, because nobody in the IETF wants to admit they were wrong.


> Google's DNS servers are the most heavily cached and carefully monitored on the Internet

Is your point that we trust Google's DNS too much?

> most batshit aspect of DNSSEC: validation is done on the servers and not on the endpoints (you "can" do endpoint validation, in the same sense as you "can" run your own recursive caching server locally).

You "can" and we "should" be validating this at the client. So why don't we start doing this? We should be beating on the security layer in DNS as heavily as we do with HTTPS and other TLS implementations.

> This is a relic of the early-mid 1990s, when the core DNSSEC design was finalized and any kind of encryption was thought to be too expensive to deploy to end-systems. It's also the reason DNSSEC is signing-only, with no privacy protections.

There are several proposals at this point that resolve the privacy issue: DNSCrypt, DNSCurve, and DNS over TLS.
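
Of those, DNS over TLS (RFC 7858) is the most conventional; a minimal dnspython 2.x sketch (8.8.8.8 is just one public resolver that answers on the DoT port, presenting a certificate for dns.google):

  import dns.message
  import dns.query

  query = dns.message.make_query("example.com", "A")
  response = dns.query.tls(query, "8.8.8.8", port=853,
                           server_hostname="dns.google")  # TLS-wrapped DNS
  print(response.answer)

  # The hop to the resolver is now confidential and tamper-resistant --
  # the privacy property DNSSEC itself never provided.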

> But we're still forced to live with that bad decision, because nobody in the IETF wants to admit they were wrong

Given the number of revisions to the DNS RFCs, specifically around DNSSec, I don't think it would be accurate to say they haven't admitted being wrong; people are constantly working to resolve issues with the system.


If we're going to accept that shit is broken and we should go ahead and fix it, why can't we fix DNSSEC itself? It's a terrible, broken, already-obsolete protocol that sees virtually no meaningful real-world deployment. Why would we spend tens of millions of dollars to deploy something we already know is broken?


see my comment in a sibling thread: https://news.ycombinator.com/item?id=12392121


I was not implying the opposite (and I agree that all of this without end-to-end is less than effective), simply stating that DNSSEC is used much more than the parent assumes.


Since very little of the Internet is DNSSEC-signed, I think it's used a lot less than you think it is.


  1493 TLDs in the root zone in total
  1343 TLDs are signed;
  1330 TLDs have trust anchors published as DS records in the root zone;
  5 TLDs have trust anchors published in the ISC DLV Repository.
http://stats.research.icann.org/dns/tld_report/


It matters that the TLDs are signed in the sense that DNSSEC is on its face a weird joke if BANKOFAMERICA.COM can get a signature, but .COM is itself not signed.

After 20+ years of standardization effort, we finally reached the point just a few years ago where Bank of America could theoretically get a meaningful signature, because .COM was signed (chaining to an RSA-1024 signature!).

But BANKOFAMERICA.COM is not signed. Nor is the overwhelming majority of the Internet. Most of the TLD's are signed, because they can be required to sign by fiat, and were. But for BANKOFAMERICA.COM to be signed, the market has to recognize some value for the effort of signing and then keeping the site signed (because if they screw it up somehow once they do sign, they'll be taken off the Internet by the small but unfortunately significant portion of end-users whose ISPs are, without them asking for it, validating DNSSEC lookups).

Smart operators are unlikely to do anything like that, because they all saw (for instance) what happened to HBO NOW, which was unavailable to anyone with a Comcast connection on its launch day due to a DNSSEC glitch.


What is the security like at these meetings? Do any security agencies take part in these ceremonies? What is preventing a terrorist group from launching an attack against these people, considering the importance of all of them? I am probably exaggerating, because all this reminds me of the not-so-old movies about hacking, a la the "Mission: Impossible" film series. Do they say the meeting is going to take place in one place (like in the article, where it says El Segundo, CA) but actually meet elsewhere? Do the key holders have 24/7 security in their respective countries to prevent stolen keys?

Then again, I am probably misunderstanding the information here, and the keys are actually useless on their own. Even if you got your hands on all of them to attempt an incursion against the building where the KSK is kept, you would not be able to do anything, because the security of that place is probably good enough to counter an attack like that; and supposing the attack did take place, the other DNS servers would simply disconnect from the root and serve the information they have until it became outdated. So the only scenario I can picture is one where a terrorist attack targets the building to disrupt the functionality of the whole system, purely for anarchy's sake.


IANA keeps an archive of all the proceedings, signed logs, and even multi-camera recordings of the signing ceremonies:

https://www.iana.org/dnssec/ceremonies

CloudFlare has published an account:

https://www.cloudflare.com/dnssec/root-signing-ceremony/


The security is designed to guard against surreptitious access. Interested members of the public are welcome to attend ceremonies (availability on a first-come, first-served basis).

There are multiple redundant sites and backups of the keys, so an attack aimed at destroying a site should not, by itself, be a problem. Even if all sites and backups were destroyed, there is a window in which a new key can be constructed (approximately 3-6 months' worth of operational zone-signing keys are prepared in advance for the root zone).


One window of vulnerability is the after-party where everyone with a piece of the signing key drinks cocktails together.


Ceremony attendees don't leave with part of the signing key.

Despite the folklore around these ceremonies that there are 7 people with "the keys to the Internet", those seven each retain only a metal key that opens a safety deposit box in the ICANN/IANA facility. Each box contains a smart card, and an "m-of-n" quorum of those cards is required to activate the HSMs containing the actual key.
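
The "m-of-n" activation is in the same spirit as the Shamir splitting mentioned earlier in the thread. A toy Python sketch of the idea (illustrative only; the smart cards use a vetted implementation, not this):

  import random

  P = 2**127 - 1  # a Mersenne prime as the field modulus

  def split(secret, m, n):
      """Degree-(m-1) polynomial with constant term = secret; shares are points."""
      coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
      def f(x):
          return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
      return [(x, f(x)) for x in range(1, n + 1)]

  def recover(shares):
      """Lagrange interpolation at x=0 recovers the constant term."""
      secret = 0
      for i, (xi, yi) in enumerate(shares):
          num = den = 1
          for j, (xj, _) in enumerate(shares):
              if i != j:
                  num = num * (-xj) % P
                  den = den * (xi - xj) % P
          secret = (secret + yi * num * pow(den, -1, P)) % P
      return secret

  shares = split(123456789, m=3, n=7)       # any 3 of 7 card-holders suffice
  print(recover(random.sample(shares, 3)))  # 123456789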


Thanks for the correction. :)


This is the human equivalent of an HSM.


I'm not sure if the author really understands just how irrelevant DNSSEC is as of right now.


I don't know why you're being downvoted into oblivion.

This is a legitimate observation. When I saw the submission title, I momentarily thought "Huh? Did DNSSEC finally take off?".

Spoiler: The answer is no. There are more effective ways to deal with security at the higher layers of the OSI model (such as relying on TLS encryption/identity).


While this is true, what DNSSec solves is the distribution of trusted identity.

It's one of the lowest layers of the Internet: if you can hijack DNS, then every connection the client is routed to can be used to exploit higher-level security features like TLS. Yes, things like certificate pinning can reduce this, but all a bad actor needs to do is get access to a poorly managed trusted X.509 CA.

DNSSec does offer a layer of trust that most people lack today.


> While this is true, what DNSSec solves is the distribution of trusted identity.

Attempts to solve. I don't trust identities just because the US government tells me to do so.

I believe certificate transparency + ubiquitous TLS + RFC 7858 (or DNSCurve/DNSCrypt) is a better cocktail.


You trust Verisign? You trust the USG? You trust the UK?

DNSSEC was hijacked --- by old-school Unix people in the IETF who loathe X.509 on aesthetic grounds and by the USG, which attempted to mandate DNSSEC --- to integrate with and replace X.509 CAs.

You write as if DNSSEC could be deployed alongside the X.509 CA hierarchy. No.

On the real-world Internet, browser trust validation has to work for everyone, or it doesn't work at all. Common-sense workarounds, like "we'll do DANE for the people who can do DANE, and then fall back to CA validation" fall apart when confronted with adversaries. Anything you "fall back" to becomes the default for purposes of security, because attackers will launch downgrade attacks to make that happen.

Don't take my word for this:

https://www.imperialviolet.org/2015/01/17/notdane.html

The net result is: DANE doesn't collapse Internet trust down to a single point; rather, it provides us with a 1036th CA. One that just happens to be run by world governments.


If I understand your basic argument, it's that shit is broken, and so fuck it, we'll just not bother with it. At what point do we decide that we should fix it?

New DNS record types not being returned: yes, this sucks; it means there are a lot of firewalls and resolvers that need to be fixed. But IPv6 faced the same uphill battle in terms of router support and is finally making a lot of headway. As people start demanding DNSSec, anything that is blocking it will need to be fixed.

RSA 1024 keys should be replaced, but again, we should be working to fix this, should we not?

> it provides us with a 1036th CA. One that just happens to be run by world governments.

It's validating a different set of data, though. On top of that, TLDs can start to be ranked based on their reputation.

Just because things don't look good right now, doesn't mean that we can't work to fix them and make them better. Similarly we need to start pushing for BGP to get secured, as attacks on that network are significantly on the rise. I don't think we should just ignore the lower stacks of the network.


DNSSEC has no meaningful real-world deployment. Nobody on the entire Internet has ever been protected by a DNSSEC validation. So if we're just going to fix things and make them better, let's fix the protocol, instead of deploying something that is broken and obsolete before anyone even starts using it.


Do you have any specific recommendations? Or something to point me to? I'd love to help fix the protocol.


Sure. DNSSEC was designed in the early-mid 1990s on the premise of "pick two: offline signing, authenticated denial, secret hostnames". They picked offline signing and authenticated denial. They picked wrong: the protocol should be redesigned to assume online signers, taking advantage of cryptography in every transaction.
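
The cost of that choice is concrete: with offline signing plus authenticated denial, NSEC records chain every name to the next one, so a signed zone can be enumerated. A dnspython sketch of the walk (pointed at a hypothetical NSEC-signed zone):

  import dns.name
  import dns.rdatatype
  import dns.resolver

  def walk(zone, limit=20):
      origin = dns.name.from_text(zone)
      name = origin
      for _ in range(limit):
          answer = dns.resolver.resolve(name, dns.rdatatype.NSEC)
          nxt = answer[0].next            # the NSEC "next owner name" field
          print(nxt)
          if nxt == origin:               # the chain wrapped around: done
              break
          name = nxt

  walk("example.org.")  # NSEC3 hashes the names instead, but offline
                        # dictionary attacks on NSEC3 are routine too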

Obviously, DNSSEC should provide query secrecy. Obviously, every DNSSEC transaction should be encrypted. That's functionality not present in DNSSEC that must be added before the protocol can be taken seriously. Resolver-server encrypted transactions also solve the problem of endpoint signature validation (because there's no weak link between servers and clients anymore).

Daniel Bernstein, among others, was smart enough to realize that if you encrypt the connection between a resolver and a server, you don't really even need signatures anymore: if every link in the resolution chain is encrypted, there's no straightforward place to launch spoofing attacks anymore. DNSSEC advocates will point out that this doesn't make injecting fake DNS records 100% impossible. But it makes it so difficult that it's not worth it to any plausible attacker.

There's nothing you can do about the fact that DNS is a hierarchical PKI controlled by governments. DANE simply can't be made to work: DNS has no business holding TLS certificates.


I think I completely agree with you:

> the protocol should be redesigned to assume online signers, taking advantage of cryptography in every transaction.

When you say online signers, do you mean not storing RRSIG records for RRsets at all? Or do you mean dynamically re-signing the zone after any update? I think there is a benefit to these records in validating that they originate from the zone they claim to come from. The biggest concern I have right now with the protocol is that having multiple nodes responsible for managing a zone is difficult, because each has to be signed by the parent zone (i.e. it makes HA harder), though with the necessary automated support this shouldn't be hard to overcome.

> Resolver-server encrypted transactions also solve the problem of endpoint signature validation (because there's no weak link between servers and clients anymore)

Do you have any thoughts on DNS over TLS vs DNSCrypt? I got halfway through implementing DNSCrypt, but I have some concerns about it being easily exploitable for DoS attacks, so I am switching to DNS over TLS for now.

> Daniel Bernstein, among others, was smart enough to realize that if you encrypt the connection between a resolver and a server, you don't really even need signatures anymore...

Yes, this is true, but the concerns I've had with this are 1) the ability to cache data and 2) the ability to add nodes to the network. While I completely agree with the approach, we would need to come up with a method that allows new nodes to be added to the graph, which might be cumbersome from a resolver perspective; in terms of zone authorities it should be relatively straightforward, though. With the current DNSSec design, trusted data can be stored on untrusted nodes in the graph, which makes caching straightforward.

If you don't mind, I'd love your feedback on a project I've been working on for the last year: https://github.com/bluejekyll/trust-dns

I want to try and deal with many of these aspects of the protocol, from the slant of making it easier to setup and operate. I currently don't have a lot of documentation around operations, as I'm still focused on implementing some things like DNS over TLS. I'd love to bake some of these concepts into the software, so if you have any feedback that would be awesome.


> DNSSec does offer a layer of trust that most people lack today.

A comforting 1024-bit RSA blanket.


Looks like this is about to change.


In the context of DNSSEC, "about to" means decades.


> I don't know why you're being downvoted into oblivion.

It's a recurring pattern: just like the author of the article, it's rather likely that the downvoters just don't know what DNSSEC actually does, due to its irrelevance.

It sounds like a cool security feature for DNS until you look at what it actually does and realise that it doesn't work.


> The same ceremony will be repeated at the secondary master key facility in early 2017 before being used in anger for the first time at a third get together in Q2.

Is this a typo, or is there some reason they're going to be angry?


"...used in anger" means used for the purpose it was intended for, not just in testing or rehearsal.

The analogy is to a spear or a gun; when used in anger, it is used to hurt somebody.


The new root zone key signing key will be generated at the next key ceremony. At the ceremony after that, the "same" thing won't happen; rather, the key that was generated will be replicated to the second site. Operational changes using the key won't happen until the key is securely stored at both locations.


My reading of this is that it is a colloquialism indicating that the initial actions are preparatory and in the background, with the third action being the actual cause of visible changes downstream.


"This is only the beginning. The same ceremony will be repeated at the secondary master key facility in early 2017 before being used in anger for the first time at a third get together in Q2."

???


Interesting but ridiculous approach to hiding content because of ad blocking: set the article's font to "redacted_scriptbold".


Particularly when you have no control over the ad blocking.


Even more low-tech: select all and paste it elsewhere.


Yet another site that confuses an ad blocker with a privacy guard. Is there some kind of pre-canned explanation or site I can point them at?


What type of key is it going to be, RSA or ECC? The article omits that detail. ECC would be a huge step forward...


It is going to be 2048-bit RSA, the same as the existing key. It was felt that rolling the key for the first time would be complex enough that tracking the effects of an algorithm change as well should probably wait for a future opportunity.


TLDR: No.


'No'



