Hacker News
Fix TLS – Let’s Get Rid of Certificates (xot.nl)
84 points by ycmbntrthrwaway on Feb 28, 2017 | 72 comments



> ... a system where ... information about the domain being visited is displayed from various sources (e.g. Google, or known phishing domain databases like PhishTank) that have a global network picture and hence are in a unique position to tell very early on whether a certain domain is untrustworthy.

So, a trusted third-party, then? To, you know, certify the site? Like... a Certification Authority, perhaps?

> If this is the first time you are visiting this website, a warning message will appear. Your browser will provide some useful information about this site (see below) and ask you whether you want to continue and really visit this site.

How's that going to work when visiting any of the majority of sites today that pull in resources from perhaps 20 or more different domains? Google Analytics, some other analytics, ads, all the trackers, ...

On a whim, I just pulled up CNN's web site [0]. uBlock Origin reports a total of 21 domains. How is the average user supposed to know if zor.fyre.co is legitimate -- or a tracker or a phishing site or "evil Russian hackers" and shouldn't be trusted?

There's a lot that's wrong with the current state of PKI on the web (too many CAs, questionable CAs, etc.) but that doesn't mean we should just throw it all out tomorrow. We're stuck with it -- at least until something better is devised and rolled out (which would take years!) -- so the best we can do is hope for continued incremental improvements, such as TLS 1.3.

[0]: http://www.cnn.com/


I don't want a web where every time a user visits a new domain they see a scary interstitial warning that asks them to make a trust decision. We have too many centralizing forces already.


> I don't want a web where every time a user visits a new domain they see a scary interstitial warning that asks them to make a trust decision.

I don't want it either, as it easily trains users to click through it.

User goes to a new website: "Oh yeah, I have to click allow <CLICK>"... User returns to the site but it's now being MITM'ed: "Oh, I thought I had already clicked allow. <CLICK>"...

Even if the warning is something like "the public key for this site has changed, do you wish to continue?", the general public will click continue. The problem is not designing a system that those of us on Hacker News can use, understand, and keep ourselves secure with; it's building a system that "non-technical" (throw in generalisation about parents/grandparents not understanding the interwebs here...) people can use, understand, and be kept secure by.
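For what it's worth, the TOFU check being debated here fits in a few lines. This is a minimal sketch assuming a simple local pin store (all names hypothetical); the hard part is the UI around the "CHANGED" case, not the code:

```python
import hashlib

# Hypothetical local pin store: domain -> fingerprint of the first-seen key.
pin_store = {}

def fingerprint(public_key_bytes):
    return hashlib.sha256(public_key_bytes).hexdigest()

def tofu_check(domain, public_key_bytes):
    """Return 'first-use', 'ok', or 'CHANGED' for the presented key."""
    fp = fingerprint(public_key_bytes)
    pinned = pin_store.get(domain)
    if pinned is None:
        pin_store[domain] = fp  # trust on first use
        return "first-use"
    return "ok" if pinned == fp else "CHANGED"

print(tofu_check("example.com", b"key-A"))  # first-use
print(tofu_check("example.com", b"key-A"))  # ok
print(tofu_check("example.com", b"key-B"))  # CHANGED (legit rotation or MITM?)
```

Note that the code cannot tell a legitimate key rotation from a MITM; that ambiguity is exactly what gets dumped on the user as a warning dialog.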


> How is the average user supposed to know if zor.fyre.co is legitimate

The current system doesn't grant any legitimacy, it only binds a name to a cert.


I completely disagree with this. Public key pinning has some well known problems that make it very dangerous to implement at scale [1].

There are some very large problems that this introduces and does not solve:

- How do kiosk machines now work? How can I trust any website when every use is first use?

- What about https domains for images and scripts? Is the user supposed to trust each domain separately?

- Assuming trusting a domain trusts all its other domains mentioned in its content-security-policy, how is this trust revoked for misuse?

- Is the author suggesting that we retrain all users of browsers everywhere? I believe that to be a herculean task.

Certificates have their problems, but this is not a solution. I will not propose one here, because it is an extremely complicated problem that needs many separate parties working together to solve.

[1] https://blog.qualys.com/ssllabs/2016/09/06/is-http-public-ke...


> How do kiosk machines now work? How can I trust any website when every use is first use?

Should you ever really trust kiosk machines? They could easily be set up with MITM eavesdropping software.

> What about https domains for images and scripts? Is the user supposed to trust each domain separately?

Presumably there could be some sort of <meta> header that listed the public key fingerprint/IDs of any subdomains that the page was going to pull in. This would make the UX better.

> Assuming trusting a domain trusts all its other domains mentioned in its content-security-policy, how is this trust revoked for misuse?

AFAIK, revocation isn't very good even with the current CA infrastructure[1][2].

[1] https://news.netcraft.com/archives/2014/04/24/certificate-re...

[2] https://www.maikel.pro/blog/current-state-certificate-revoca...


> Should you ever really trust kiosk machines? They could easily be set up with MITM eavesdropping software.

I should have the choice. This doesn't even give me that.

> Presumably there could be some sort of <meta> header that listed the public key fingerprint/IDs of any subdomains that the page was going to pull in. This would make the UX better.

This would make the UX barely passable. If any of these domains change the content at all so the hash changes, how does it get updated?

> AFAIK, revocation isn't very good even with the current CA infrastructure[1][2].

I completely agree with you, so let's not make it any worse.


> I should have the choice. This doesn't even give me that.

I don't understand why you say this. Can you explain more?

> This would make the UX barely passable. If any of these domains change the content at all so the hash changes, how does it get updated?

Not a hash of the contents, just the sub/external domains' key-ids. Yes, the main page would have to change if you updated the keys. Doesn't seem too onerous to me.


>> I should have the choice. This doesn't even give me that.

>I don't understand why you say this. Can you explain more?

Not who you replied to, but with this system, I need to trust everything between the kiosk and the website server to not be MitMed. With the certificate system, I only need to trust the kiosk itself. Specifically, I need to trust the browser does TLS right, and I need to trust the installed root certificates are correct.


This is my thinking.

>> Not a hash of the contents, just the sub/external domains' key-ids. Yes, the main page would have to change if you updated the keys. Doesn't seem too onerous to me.

Then that means each external domain has to tell all its linkers that its key will change whenever it does, assuming it even has such a list. What if a party doesn't respond? I understand it would be said party's problem, but it sure does make re-keying difficult.


> Should you ever really trust kiosk machines? They could easily be set up with MITM eavesdropping software.

This has already been answered, but I'll offer a different perspective. For a lot of people kiosk machines are the only way they get to use the internet. Not everyone has the privilege of owning their own computer or internet-enabled phone.


> I completely disagree with this. Public key pinning has some well known problems that make it very dangerous to implement at scale

I don't know what you are commenting on, because the author of the linked article suggests we use Trust On First Use, not HPKP.


Which is awful from a user perspective; not everybody is an engineer. Many, many people would be more confused by that.


1) So I boot my new phone, connect to some coffeeshop's wi-fi and navigate to some HTTPS site. The coffeeshop's captive portal intercepts my request to show me a TOS page, sending me their public key instead. Now my browser has that key remembered for that domain, and will error when I visit the real site.

2) Key rollovers are accepted silently if signed with the previous key. So an attacker who steals the key can sign their valid ones forever, right? If the server can invalidate previous keys to reduce that risk, what happens when the attacker uses that feature themselves against the actual site owners?

3) How is TOFU better than previous proposals like Perspectives and Convergence?
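The rollover problem in (2) can be sketched with a toy signature stand-in. This uses HMAC in place of a real public-key signature purely to stay stdlib-only; it illustrates the control flow, not the cryptography, and all key names are hypothetical:

```python
import hashlib, hmac

# Toy stand-in for a real signature scheme. A real rollover would use
# public-key signatures; HMAC is used here only to keep the sketch runnable.
def sign(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key, msg, sig):
    return hmac.compare_digest(sign(key, msg), sig)

old_key = b"previous-signing-key"
new_pub = b"new-public-key"

# The server announces its new key signed with the previous one; the
# browser accepts the rollover silently iff the signature checks out.
rollover_sig = sign(old_key, new_pub)
assert verify(old_key, new_pub, rollover_sig)

# Which is exactly the worry in (2): whoever steals the old key can mint
# "valid" rollovers to keys of their own choosing, indefinitely.
attacker_pub = b"attacker-public-key"
assert verify(old_key, attacker_pub, sign(old_key, attacker_pub))
```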


> 1) So I boot my new phone, connect to some coffeeshop's wi-fi and navigate to some HTTPS site. The coffeeshop's captive portal intercepts my request to show me a TOS page, sending me their public key instead. Now my browser has that key remembered for that domain, and will error when I visit the real site.

Yeah... this brings us back to basically the same setup as before.

Your device needs to reach out to some 'trusted party' to ask it whether it should warn you about the key.

How does your device connect securely to this trusted party, so someone can't MITM the trust question? Ship the trusted party's key on the device.

... sound familiar?


Not necessarily "on the device"—they can be part of the browser. Or—for phones at least—part of the carrier profile. In either of those cases, rather than one centralized authority, there's an explicit chain of trust: you trust the browser maker, who trusts set X; or you trust your ISP, who trusts set Y.

That's more in the mindset of how the original X.509 design was supposed to work: you trust your company/university, who installs set Z on their workstations. Except, in this case, you're much more free to switch who you trust.

Of course, in the "carrier profile" case, a lot of the smaller ISPs might delegate their cert-verification to some company set up specifically to be an, ahem, Authority for Certificates.

But those companies would be "CAs" in a very different sense than the one we have today: they wouldn't issue certs; they'd instead generate pin lists and run OCSP portals, i.e., verify certs—including certs that are not their own. They'd essentially provide "TLS stack plugins" for devices that subscribed to them, instructing those devices on what makes for a valid or invalid cert. (Or you could build the infrastructure the other way around, and make a "TLS thin client" library that talks to its CA every time it wants to know if a cert is valid. Given that OCSP already exists, it wouldn't be that much more traffic.)

This would also mean that the TLS "technology" could evolve much more quickly, since TLS extensions wouldn't have to get 100% traction with TLS client software before they'd be useful. Instead, the extension would merely need to be taken into consideration by one CA when that CA generated its pin-lists and OCSP databases (or, dynamically by webapp-like software, in the TLS-thin-client case.)


Almost, but now you only have to trust one third party, not 189 (/etc/ssl/certs).


There needn't be a lot of difference, if that single trusted party still signs a lot of other CA certificates.

You don't just need to trust the installed root certificates; you need to trust all CA certificates those root certificates have signed (except with public key pinning, which ignores installed root certificates).


Re #1, it's way more likely you are visiting a site you've already been to on a new wifi hotspot, so the TOS page will refuse to render if they hijack the domain like that. They'd have to work around that themselves, otherwise no one would be able to use their wifi. Maybe an HTTPS downgrade attack followed by a redirect to another domain, so that they can give you a key that doesn't interfere…


> Re #1, it's way more likely you are visiting a site you've already been to on a new wifi hotspot

Maybe. Unless you're poor, in which case you often don't have any other access besides wifi hotspots.


Or you do something radical like a search on Google.


Set up your new phone on a trusted network, not at a coffee shop.


In practice I bet a large portion of people set up their phones at cell phone stores. Do you really trust AT&T/T-Mobile/Sprint/etc. to have properly secured WiFi? My bet is they don't.


You visit all sites you ever want to visit during setup of your phone?


This proposal makes no sense. It wouldn't improve security, it just proposes an annoying UI that in itself is a security risk, and the stripping away of several useful features of the current trust system.

Critically, unbinding public/private keys from DNS names removes a layer of trust. I trust my browser-maker to grant only legitimate CAs approval to sign certificates that will be accepted by default by my browser, and my DNS provider to send me the correct IP for a given name, and I trust myself that I know the DNS name of the site I want to visit. Those three things work in concert with each other to give me a relatively high level of assurance that I'm talking to the site I mean to be. What's more, thanks to certificates, I can verify this all myself! Export the site certificate, and check its signing chain to go back to the CA, which I can also decide for myself if I want to trust or not, if I don't trust the browser-maker's judgment.

Add to that certificate revocation, and I fail to see how the proposed plan makes any sense at all. The author never makes a case that name-based certificates are actually a problem. Browser makers are free to add extra layers of security to the current system that could do what the author proposes. But from little details of what he says, I suspect what's really going on is that he just doesn't want to pay for certificates or deal with setting up Let's Encrypt.


Having spent a few years dabbling in TLS stuff recently... I have some comments.

This article is really about PKI more than TLS, and what it means to be a user's agent ("user agent"). Forgive my critical tone in advance, it's just that proposals like this need to go through the wringer...

> you are supposed to be certain that you are communicating with the website you intended to visit. the green padlock in front of the URL in your browser tells you TLS is securing the connection. But this reasoning is wrong.

No, it's not. It's correct to reason that the green lock (as implemented by most modern browsers) indicates a secure connection to the site in the URL bar. Now, whether that site is behaving virtuously... is a different matter, and not one that TLS is designed to solve. It also doesn't mean TLS is broken. (TLS is quite good, especially the emerging 1.3 standard. Qualms with TLS usually end up being about PKI, which is the best thing we've got widely deployed right now, whether you like it or not.)

"Intended to visit" -- when we look at the browser as the role of the user agent, or in other words, a program that is supposed to act helpfully on behalf of the user -- the browser infers intent from what the user tells it to do. If you have a private chauffeur and you tell him to take you to such-and-such place, he will go there, even if you intended to go to such-another place. Oops? Maybe agents who know you really well could double-check with you if they think you meant something else, but do you really want your browser to know that much about you?

> The real solution: get rid of certificates

This is ridiculous. Certificates make possible the cryptographic exchanges that guarantee confidentiality, host (and client) authenticity, and data integrity. Don't throw away the single most valuable component of modern Web security.

> Being able to tell that we are visiting the same website as before allows us to establish a trust relationship with that website over time. With every positive interaction, our trust increases (and with every bad experience, our trust drops). We don’t need the website’s name for that.

This isn't about TLS, this is now about PKI and trust model issues. Yes, there are various trust models and each have advantages and disadvantages. We went with PKI for the wider Internet because it fits the mold best when connecting machines to other machines at scale. In contrast, WoT and other models (including TOFU) work better in small groups of people or machines in semi-controlled environments (e.g. TOFU is manageable with your small-medium-sized fleet of servers).


Agreed. Straight TOFU without CAs is a bad idea usability wise; we're still struggling to get the UI down for padlocks... But when you combine TOFU and CAs, well, you get something like HPKP, which is a good middle ground, I think.


> TOFU (Trust On First Use) principle ... used by ssh

No. That might be how you use SSH, but you are not really using SSH correctly IMO so you are not qualified to tell me HTTPS should run like SSH...

For a properly secure connection you should be verifying the fingerprint against a reference you received through a different trustworthy communications channel. If I'm connecting to my own servers I know the value is correct, as I set up the server and have a record of it. If it is a newly set-up machine/VM at a hosting location I might have to trust that the first fingerprint I see is OK, but if I'm connecting to an existing service I expect to be told the fingerprint so I can check it. FYI: in my day job I handle file transfers potentially containing our clients' client data, so we have to do this sort of thing properly.

Ignoring the "you are doing SSH wrong" thing for a moment, there is another major flaw in TOFU: users will just click OK without thinking when presented with an "are you sure" question. The people most vulnerable to the current problems with TLS are not going to have their risk profile changed at all (or if it changes it'll be for the worse) by this proposal.


Just so you know, in case you don't: you can use certificates for SSH these days. Not X.509, but certificates nonetheless, in that they bind a set of names or IP addresses to a public key and are signed by a CA you trust; for your machines, that will be a CA you control.

Now I can deploy as many new machines as I like and I only have to sign their host keys for all of my SSH clients across my machines to trust that new machine before I first use it.

If this is new information, I have left example instructions at https://paste.debian.net/plainh/6b332023


I'm aware of certificate support for SSH, though getting our clients to use something other than what they "have always done" is somewhat painful. It isn't just in browser choices and desktop OSs that they are firmly stuck a decade and a half behind...

Certificates for SSH aren't something that the OP is going to like though, as he is trying to use SSH as part of a rationale for doing away with certificates for HTTPS...


Right, but the old clients that don't support Host Certificates will just keep using the Host Keys that they always have, and prior confirmation or TOFU or whatnot. It can be of great benefit to the new clients at the same time, though -- because you need to put the hostkey and the certificate side-by-side on the sshd.

Clients that have a CA known_hosts(5) entry for a given name (or a mask that matches the name they're connecting to) will prefer host certificates over host keys and attempt to verify them that way if the server presents one, but if that fails they'll still fall back to regular host key validation (unless you explicitly set the HostKeyAlgorithms directive to eliminate non-certificate-based ones).

It's the best of all worlds, in my opinion.


>Your browser can easily distinguish these two cases if the new key in the header of the webpage is actually signed with the previous key for the same website (whose public counterpart is stored by the browser).

I don't see how this can work. If we need to update keys because of compromise, the keys cannot be relied on to sign the new keys, because they have been compromised. Nor can such a case be pre-announced.


So, essentially the same way SSH works?

The main problem I'd see is training users to get that initial trust phase correct. Too many wouldn't try to parse the info given and just click Trust regardless (or run away from every site with the warning).


The same way, except SSH is generally used for a small number of hosts that the user itself (or their organisation) controls, so there's some chance for out-of-band verification.


> So, essentially the same way SSH works?

Not if you are doing SSH right... That "fingerprint validation" message you get the first time you connect to an SSH host: you should be verifying it against a copy received through another (preferably secure, of course) channel, not just blindly trusting the first fingerprint you are shown.


RFC 6844 defines a DNS record (CAA) which supposedly allows you to restrict which CAs are allowed to issue a certificate for your domain. Of course, this works on the honor system. https://tools.ietf.org/html/rfc6844
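For illustration, CAA records in zone-file form look something like this (the domain and CA identifier are placeholders):

```
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```

The `issue` tag names a CA permitted to issue for the domain, and `iodef` gives a reporting address for violations. A conforming CA is supposed to check these records before issuance and refuse if it isn't listed, which is where the honor system comes in: nothing in the TLS handshake enforces it.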


I've thought for a while that CAs should be restricted to a certain subset of the DNS space. So say CNNIC should be limited to .cn. The US DoD should be restricted to .mil.us. Google's CA could have their root for .google.

That way, we can add CAs and they can be trusted at the root for a small amount of space.

Obviously, we'd need to do some amount of grandfathering, but CAs should know who they've issued for.

Is there any reason this isn't plausible?


There's a new Name Constraints TLS extension to implement it: https://tools.ietf.org/html/rfc5280#section-4.2.1.10

See also https://wiki.mozilla.org/CA:NameConstraints


(Edited to explain about X.509v3 name constraints)

That's not a TLS extension; it's two separate things you've linked.

The first thing is a standard feature of X.509v3, which doesn't help you at the root: although root keys are distributed in the form of X.509v3 certificates for convenience, they aren't really treated as certificates. After all, you're trusting them; if you can't trust the root then it's Game Over. We use these name constraints all the time; they often protect a corporate subCA.

The second thing is a proposal to Mozilla policy from 2015, ie, if it was implemented, this would be a change to the behaviour of Mozilla's TLS client, most specifically Firefox.

It isn't currently being discussed by m.d.s.policy, the group which decides Mozilla policy. If you care about Web PKI policy m.d.s.policy is basically the only place you can influence it, everybody else of any significance makes these decisions in private (e.g. Microsoft, Apple, Oracle) or takes their cue from m.d.s.policy (most Free Unices, more obscure browsers).


This would be awesome for my company (SAP), since it could get its internal root CA into the standard set of root CAs, with the name constraint on the "sap" TLD. (Yes, that TLD is a thing.)


People looking after the Web PKI aren't interested in adding trust for such a root which doesn't deliver any new value to their ordinary users. The US Federal Government is about to try this too, and I predict they'll struggle to jump all the hurdles (this isn't the FPKI, that's separate). You can get an existing CA (or more than one if you're paranoid) to write you a name-constrained subCA certificate today instead.

This will cost a lot of money, but if they're pointlessly running a TLD then SAP is already burning money, presumably they can just fork a few million extra onto the bonfire.


That can be secured with DNS-Sec, leading to DANE. It is often suggested as either an addition or replacement for the CA system.

However, this is basically the CA system in disguise. Instead of trusting CAs, we need to trust domain servers. Thing is, domain servers tend to be controlled by governments even more so than CAs. There are some other issues with the technical implementation of DNS-sec, though I do not recall them at this moment.


For DV certs you need to trust CAs and domain servers. Trusting only domain servers is an improvement.


I think ultimately you end up having to have a web of trust, but this assumes that the devices of the people in your web have not been compromised (backdoored apps and/or crypto libs).


I think a system like CAs or DNS-sec, with hierarchical trust is a good base. However, we'd need to add certificate transparency and 'triangulation'. Certificate transparency is a centralized way to detect wrongly-issued certificates. Triangulation is a peer-to-peer way, where a client asks a few other clients what certificate they have.

I like a DNS-based approach, because it ensures only one authority can authorize a key, though it has the issue of often being a state-controlled authority.
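The 'triangulation' idea mentioned above can be sketched as a unanimity check across vantage points. This is a hypothetical toy (the notary functions and fingerprints are made up), but it shows why a purely local MITM fails the check while a consistent view passes:

```python
from collections import Counter

# Hypothetical peer "triangulation": ask several vantage points (notaries)
# what key fingerprint they observe for a domain, and flag any disagreement.
def triangulate(domain, notaries):
    votes = Counter(notary(domain) for notary in notaries)
    fingerprint, count = votes.most_common(1)[0]
    # Unanimous agreement -> accept the fingerprint; any split -> suspicious.
    return fingerprint if count == len(notaries) else None

honest = lambda domain: "fp-1111"  # fingerprint the real site serves
mitmed = lambda domain: "fp-6666"  # fingerprint a local MITM would show

print(triangulate("example.com", [honest, honest, honest]))  # fp-1111
print(triangulate("example.com", [mitmed, honest, honest]))  # None
```

Of course, this only helps against an attacker near the client; an attacker near the server (or one who controls the notaries) shows everyone the same wrong key.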


I tend to like the DNS approach as it is supposedly non-commercial and non-state operated, but again you have to place your trust in a group of people you do not know (I have never met one of the root signers before), so you have no way to know if they or their equipment have been compromised by a state actor.


You already trust the DNS root, as long as domain names are what certificates are based on. If you can transfer a domain name illegitimately you will have no problem getting a completely legitimate certificate signed.

But the fact that domain names are hierarchical means you have to trust one or two entities at most. That's why domain name theft is historically less common than illegitimate TLS certificates.

You'll find all sorts of arguments that the CAs do more things than that, and no end of security professionals who work for them. Some of these arguments are probably correct, but that doesn't change the overall picture of how a secure global TLS PKI should operate in the presence of domain names. It does make the current system quite resistant to change, however.


Sure, the root keys of DNS are non-state operated, but they are used to sign the DNS-sec keys for the TLDs. These, in turn, are used to sign the keys for the nameserver responsible for serving the normal domain.

For any country TLD domain (.us .uk .nl), those keys are held by a state actor. That TLD then gets to sign any lower level DNS servers.

Say whoever controlled .com wanted to change the key for hackernews.com. They'd simply change the NS record for hackernews.com to point to a different DNS box with different keys, one that proxies DNS requests for the real server but re-signs them. It can then simply serve a new DANE record. (DANE records are the DNSSEC equivalent of CA-issued certificates.)

This is essentially the same procedure as a CA issuing a wrong certificate.

The source of this problem is that you don't just need to trust the root DNS servers, but you also need to trust the TLD-servers that are signed by the root DNS servers. (And down recursively if there are more NS records in between)


No matter what, you have to trust someone, and any method that requires first-time click through every time on every new website is bound to exhaust users, resulting in them clicking through blindly. At least a certificate violation is a rare enough event, although as mentioned, it can potentially be corrupted by the CA. Regardless, you trust the browser implementation.

A better solution might have browsers try to intelligently identify suspicious-looking URLs (including URLs that start with a popular, external URL, like paypal.com.notpaypal.cc). As for the actual certificates themselves, perhaps this is a good application for blockchain technology, with certs being created by website owners and submitted to a blockchain by a machine assigned to the DNS name that wants to create its cert. But in this case, we still require DNS, which is built on trust... so perhaps DNS could also be put on a blockchain. How? I don't know. But until we figure that out, we have to rely on various organizations to provide trust.


> resulting in them clicking through blindly

In the proposed scheme, they are ALWAYS clicking through blindly. There's no way for a user to know whether they got the original public key or someone else's. How is a user supposed to verify it? Call the owners of the website and ask them to dictate their public key?

This warning screen is thus just security theater.

The whole proposed "solution" is not a solution at all and doesn't work — the author could have saved a lot of time typing this blog post by just listing the threats that certificates are trying to solve on a piece of paper and putting a check mark next to each one their proposed "solution" solves.


Nice writeup - good to see a solution proposed, rather than the same complaints about 'how TLS is broken'!

That said, the key flaw that's raised at the start of the article (how do you know that amazon.de is Amazon) is not really solved with this solution. You still need a trusted third party to bootstrap that first visit.


For most people that's currently Google due to search-based location bar.


Moxie had a talk called "SSL and the Future of Authenticity" at blackhat about an old project of his called Convergence. The talk is on youtube.

https://en.wikipedia.org/wiki/Convergence_(SSL)


It's a shame Convergence was discontinued. I think it was a good solution to the problem.


This article seems to miss the point that a website rarely serves content from a single domain. If it pulls content from 30 different domains, all TLS-protected, would the user get 30 popups asking them to allow each one?

The current TLS/PKI allows for much better UX, users would reject the proposed solution.


Someone suggested some sort of metadata that could be passed down to the browser to say that this list of domains is valid. But if the website is hacked and injected metadata is put in, then I think that's even worse than the current problem. It gives trust to a domain implicitly.


Preloading keys from popular websites would increase centralization: content would move to Facebook and the like to avoid the warning.

This TOFU thing could be useful with self-signed certificates (like those shipped in servers' BMC).


> This TOFU thing could be useful with self-signed certificates (like those shipped in servers' BMC).

But that's how it already works, at least traditionally (I don't know about Chrome): you connect to it, and the browser asks you if you want to make a permanent exception for that cert. Next time, it'll just connect.


In Chrome, there is no such thing as a permanent exception. In Firefox, the permanent exception includes unexpected certificate change.


> In Firefox, the permanent exception includes unexpected certificate change.

I didn't know this. This is horrible and does not make any sense.


This post makes no sense to me.

How is the proposed solution, where a MITM attack could be going on the very first time I visit a website I want to trust, better than the current state of PKI?

The proposed solution demands that we trust the website on the first visit if the website behaves well. So now it is very easy for Mallory to perform a MITM and behave well, so that I trust Mallory's public key. Now Mallory can see all my traffic in subsequent visits. How can one even imagine that this is an acceptable solution?


I don't see how this proposal is any different (and even any better) from a small red exclamation mark icon next to the padlock that tells the user "you have never visited this website before".

All your usual websites wouldn't have this icon, but paypa1.com would, and it would be easy for the user to think "wait, what do you mean I've never visited Paypal before?".


The author doesn't understand PKI. Certificates provide a host of features that have grown out of the needs of organizations serving users. Public key only really works for SSH because its use case is so incredibly limited.

Certificate rotation/expiration/invalidation, multiple certificate per domain, multiple subdomain certificates, multiple feature certificates, a dizzying array of certificate extensions, not to mention the organizational control of their issuance across multiple orgs for the same domain, as well as user certificates, are all necessary and all are abandoned by public key. There's a lot more I don't even remember off the top of my head.


There are browser extensions to implement TOFU. Just use that if that's your thing, don't make the rest of us insecure please.


Key rotation and signing with the previous key doesn't work for credentials that are compromised. It also potentially fails for devices that miss a whole key rotation, which could happen with devices that aren't regularly connected to the internet but still need access for things like firmware upgrades.


Maybe I'm missing something important here, but isn't this just what we had with self-signed certificates and adding security exceptions in the browser?


It would be progress for the savvy and regression for everyone else, but I still like his thinking here even if it's ultimately a mistake.


If the savvy want this they can already have it, it's just a matter of removing the certs from the browser's CA store.


He added a few flourishes, like network insights, pre-installed certs, and a signature in the header for key rotation.

Might be fun to usability test that scenario in a VM or something just to see how it works in practice.


What a terrible idea. Granted CAs have their problems, but this is actually worse.

Not to mention that this scheme still uses a CA of sorts. Lose-lose.


The Netherlands is an 'evil government'?


Depends on perspective (although I guess the US would have been a better example). There are many people who consider the Chinese government to be just as non-evil as you consider the government of the Netherlands.




