Moving forward on improving HTTP's security (w3.org)
489 points by iand on Nov 13, 2013 | 237 comments



This is a dumb idea unless CAs become automatic and free or are completely replaced by something better.

The reason why HTTPS isn't used more is because it's a major hassle and it's quite expensive (can easily double the yearly cost for smaller sites).

If using HTTP 2.0 requires buying SSL certificates, the smaller sites currently not using SSL will just be stuck on HTTP 1.1 forever.


The encryption of the transport and the verification of the identity of the server should be more disconnected.

The CA part is only to verify that the machine you are talking to is who it says it is.... in reality all this really verifies is that they filled out a form and there is probably a money trail somewhere leading to a real entity.

But I've never understood why the identity is so tied to the transport security. It would make everyone's life easier if HTTPS could be used WITHOUT identity verification (for 90% of cases this would make the world a better place)

We'd still want to purchase third-party identity verification... and browsers would still present this in the same way ("green padlocks" etc)... but even without verification every connection could be encrypted uniquely, and it would raise the bar considerably for a great number of sniffing-based issues, would it not?

EDIT: I guess what I'm saying is a social issue: We've put so much effort into telling everyone that "HTTPS is totally secure", that we've left no room for a system that is "Mostly secure, unless someone has compromised the remote system/network" .... maybe it's too late to teach everyone the difference between "encrypting a letter" and "verifying that the person you give the letter to is who they say they are"


I'm sitting here still trying to think of a way to prevent MITM attacks if you have no idea who the guy is on the other side... Maybe I need to drink more tea this early in the morning?

I guess if you did something weird like flip the protocol upside down such that all people would have "internet licenses" and enter them into the browser they're using at that moment (or better yet, let's charge each user $50/yr PER BROWSER LICENSE) and it became the site's problem to encrypt to the end user's key... One way or another I think you have to verify the identity of at least one side WRT MITM?


So you don't prevent MITM attacks...it's still a step up from cleartext.

All this change is meant to ensure is that all HTTP/2.0 traffic is encrypted, not that it is all authenticated. For authenticated communication, we continue to have what HTTPS is today.

The main issue is retraining people to not think that "https" means "safe". That's something that browsers are already good at, however, because there is already a world of difference between the user experiences of visiting a site with a trusted cert and visiting a site with an untrusted cert.


It's not a meaningful step up from cleartext, because a passive attacker can become an active attacker with just a couple spoofed packets or control of a single DNS resolver.


It is a meaningful step up, because the passive attack is entirely risk free for the attacker, while an active attack carries with it the risk of detection.

The practicality of enormous secret drag-net operations like the NSA has been running would decrease dramatically if TLS had been the norm rather than the exception, even with unverified certificates. You can't opportunistically MITM 100% of connections without somebody noticing.

It is a shame that cleartext connections have to be the out-of-the-box default in every web server. Security should be the default, and I think the CA mess is to blame for that not being the case.

The sane thing to do would be generating a random self-signed certificate if the administrator didn't set up any explicit certificates. That would prevent passive attacks, and can be built on top of with technologies like certificate pinning and Convergence to mitigate active attacks.
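
As a rough sketch of what that could look like (assuming Python and the third-party `cryptography` package; every name here is illustrative, not any particular server's actual startup code):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    def generate_default_cert(hostname="localhost"):
        # Fresh key pair for this installation only; it never leaves the box.
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])
        now = datetime.datetime.utcnow()
        cert = (
            x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)  # self-signed: issuer == subject
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=365))
            .sign(key, hashes.SHA256())
        )
        key_pem = key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        )
        return key_pem, cert.public_bytes(serialization.Encoding.PEM)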


This appears to be a comment that seriously suggests removing authentication from TLS as a countermeasure to NSA surveillance.


Not entirely. I kind of agree. I think I even posted a nearly identical comment once.

It would be nice if there were two kinds of connections. Encrypted unauth and encrypted auth. That seems strictly better than offering unencrypted connections. Your browser could still display the "wide open to attack" icon for encrypted unauth if you like.


Why?!

The entire reason people think TLS should have "unauthenticated encryption" (which is in the literature kind of an oxymoron) is that they don't like the SSL CAs.

I don't like them either.

But the SSL CAs are a UI issue. Instead of dinking around with the browser UI to make worthless "unauthenticated encryption" sessions appear different, why not just build a UI for TACK, let people sign their own certificates if they want, but then pin them in browsers so those sites can rely on key continuity instead of nothing.

Five years from now, if TACK pinning becomes the norm, I think it's a safe prediction that basic TLS certificates will be free from multiple vendors. Moreover, the corruption of the CA system will matter less, because unauthorized certificates will violate pins and be detected; the CAs that issue them can be evicted.

While we're at it, why don't we just fix the UI for managing CA roots? It hasn't changed since the mid-1990s.

I am baffled by why anyone would actively promote an insecure cryptosystem as a cure for UI problems, or even as an alternative for some entirely new cryptosystem like MinimaLT.


It's just a matter of what can be done today vs tomorrow vs next year.


All of these things are simply gated on browser vendors. That's the overhead. Why would you push for a new UI for insecure connections when you could instead lobby for TACK?


Of course not! I am suggesting that connections should be encrypted by default, whether the endpoints can be authenticated or not.


That's still a step up. Now you need to be an active attacker, and not just a passive one.

The perfect is the enemy of the better.


The worse is the enemy of the better too.


It's easy to decide to avoid the worse. But deciding on the tradeoff for the better vs the perfect is much harder.


This tradeoff is easy. UI that makes unauthenticated connections easier to accept is a bad idea; UI that makes certificate pinning work better is a good idea. Suggested tradeoff: pursue good idea, avoid bad idea.


> All this change is meant to ensure is that all HTTP/2.0 traffic is encrypted, not that it is all authenticated.

This is a perfect example of <strikethrough>"good enough is the enemy of good"</strikethrough> "not completely broken in every possible way is the enemy of barely good enough" that is so prevalent in web security. If we don't use this chance we have now to secure internet traffic we will continue to be completely vulnerable to rogue WiFi APs like http://www.troyhunt.com/2013/04/the-beginners-guide-to-break... and to companies as well as countries snooping their employees'/citizens' traffic via huge proxies for years to come.


The "guy on the other side" is the fridge you bought at the store and just installed in your house.

You want to connect to it securely, but the fridge really has no way to prove to you who it is through any kind of third-channel.

Hell, forget about fridges. It's the router you just got at Best Buy.


> I'm sitting here still trying to think of a way to prevent MITM attacks if you have no idea who the guy is on the other side... Maybe I need to drink more tea this early in the morning?

It's not that you don't know who the guy is, you just don't rely on a 3rd party to tell you that. See how SSH fingerprinting works.


SSH keys work exactly like self-signed certificates. On first connection you get the "whoah, this isn't trusted, do you want to proceed" warning, and if you accept, you are not warned in the future unless the key changes.

If browsers would make it easier to "permanently accept" a self-signed certificate (right now it's usually a multi-step process with blood-red warning messages at every step) we'd have the same situation as SSH keys.
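
For illustration, a minimal sketch of that SSH-style trust-on-first-use idea in Python (the pin-file location and helper names are made up, and this skips everything a real browser would need):

    import hashlib, json, os, socket, ssl

    PIN_FILE = os.path.expanduser("~/.https_known_hosts")  # hypothetical store

    def fingerprint(host, port=443):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE      # no CA check; we pin instead
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def check_pin(host):
        pins = json.load(open(PIN_FILE)) if os.path.exists(PIN_FILE) else {}
        fp = fingerprint(host)
        if host not in pins:
            pins[host] = fp                  # first use: trust and remember
            json.dump(pins, open(PIN_FILE, "w"))
            return "first use - pinned"
        return "ok" if pins[host] == fp else "KEY CHANGED - possible MITM"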


HTTPS deployments often involve multiple servers that are rotated regularly and commonly use different keys, so the key a client sees changes all the time. There'd be no way to know whether a key change is part of regular operation or a man in the middle.


Note that there's a difference between SSH fingerprints and (self-signed) SSL certificates. Multiple servers can easily share a certificate.


It would be unwise to do so. In any case, phasing out old servers and certs for new ones is common practice. Using one cert everywhere (which would mean using one set of keys everywhere, which is a horrible idea) would require more downtime for maintenance of the certificates. It's not gonna happen.


> See how SSH fingerprinting works.

It doesn't, as a MITM prevention technique, for a gullible population. It doesn't even work for a non-gullible population that's been trained to always hit "Y" on first connection from a host to a new server... err... I mean first connection to a MITM who then talks to the new server for you.

There are ways to make the situation slightly harder like the extremely unpopular idea of putting SSH host keys in DNS and then securing DNS ... err .. probably securing DNS via a CA type backend.... Well even unsecured DNS holding SSH host keys is better than nothing, or at least it makes people more susceptible because they feel safer, or something like that.


The odds of encountering a MITM attack on your first connection to a new server are low.

If it does happen, then the attacker will have to keep doing it forever, or else you'll get a warning the moment you manage to connect directly to the site without the MITM.

If your first connection is direct, then you're safe from MITM forever. If your first connection is compromised, then at least you'll likely discover that fact quickly.

I think this qualifies as "works".


This seems similar to the logic behind TOR entry guards: https://www.torproject.org/docs/faq#EntryGuards


I think the point is that most users wouldn't notice having to press "Y" again to accept a new fingerprint.


ssh only gives you the "y/n" choice the first time you connect. If you've connected before but the key has changed, it throws up a very nasty warning and does not even give you the choice to continue. You have to manually edit your key file to remove the offending entry if you want to start using the new key.


A fence doesn't stop someone with a ladder, but that doesn't mean fences are a bad idea.


I couldn't agree more.

It is much more secure to visit a site with a self signed certificate than to visit the site over http. And yet, browsers start flashing red when you do that. At the least, they should show red on http, yellow on self signed https, and green on trusted https.


I agree.

One actual use case that could be solved in a better way than today would be login portals where a user has to be logged in to access the Internet.

Today, this is typically solved by issuing a redirect of some kind to the client (in the future, I guess it will receive a 511).

For HTTPS, the choices are: a) dropping the packets, ensuring extra costs in the support organizations when users wonder why their internet doesn't work, or b) doing a MITM and issuing a redirect to the login portal that way.

Different operators choose different solutions here. Neither choice is good. Having a way of telling the client that yes, the connection is still encrypted, but it didn't end up at the place it expected, would be an improvement.

Might it be possible to extend TLS so that there is some way of issuing a gateway redirect? Perhaps, perhaps not. I've seen precious little action in that area.


Surely there must be a solid reason why endpoint authentication and transport encryption must be inextricably combined into one program.

Personally, I would find great use for the authentication function of OpenSSH as a separate program. In other words, a program that does one thing only: it verifies an endpoint is who it says it is.


The short answer is that at the time this stuff was designed, it was assumed that a passive attack could trivially become active instead, so it was assumed that defending against passive attacks wasn't worthwhile.

Newer info about fiber splitters invalidates that assumption.


I recently bought a PositiveSSL certificate for less than $3 per year at gogetssl.com. That's less than one third of what I paid for the domain to use it on. If you can afford a domain, you can afford to put a SSL certificate on it.

Low-cost SSL brands like PositiveSSL and RapidSSL are so cheap nowadays, some registrars hand them out for free if you buy a domain. And they're compatible with every version of IE on Windows XP, unlike those free certs from StartSSL.


The cost usually isn't so much the cost of the cert, it's more the cost of the static IP.


What browser would support HTTP 2.0 but not SNI?


Lots of utilities that aren't conventional "browsers" but talk HTTP.


The question is still completely valid. What tool would support HTTP 2 but not SNI?


I'd not heard of SNI before. Is this something that can be used now?!


Yes ... as long as you don't need to support IE on XP, Android 2.x, or Java 6.

https://en.wikipedia.org/wiki/Server_Name_Indication
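
For what it's worth, here's roughly how a client sends SNI with Python's standard ssl module - the `server_hostname` argument is what puts the hostname into the TLS ClientHello, so one IP can serve many certificates (host name is just an example):

    import socket, ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        # server_hostname is the SNI value; without it, name-based virtual
        # hosting of TLS sites on a shared IP can't pick the right cert.
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version(), tls.getpeercert()["subject"])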


> IE on XP, Android 2.x

That's still a lot of devices.


Neither of those browsers supports HTTP/2.0, so that's moot.


> IE on XP

In this case, that's also going away hard next year when Microsoft discontinues support for Windows XP. At that point it'd be really tempting to suggest switching to Firefox or Chrome, both of which do support SNI.


SNI is useful for hosting, but I don't think it helps embedded devices. Is any CA willing to issue me a cert for 192.168.0.1? Wait, don't answer that.


Why do you want to use global CAs for internal services? Wouldn't it be better to use your own CA? I find that identifying a site by its cert fingerprint is much stronger authentication than the fact that it has a valid cert. Actually it would be a good idea not to trust anything other than the company's internal CA for internal services. But as far as I know, browsers aren't up to this challenge. Maybe AD allows this, but I haven't ever seen any post on how to do it.


It'd be more interesting to see if a CA would issue a cert for something.local — sadly, you're probably right to fear the worst…


They will -- but I believe that's to be phased out by 2015 or so.


You can solve this by setting up your own Certificate Authority.


If we'd get to it and get IPv6 up, the business of selling static IPs should become very unprofitable, as there would be a virtually unlimited supply of IPs. Why is this not happening?!?


For the same reason that SSL adoption is currently lower than ideal, for many uses the increased cost (actual cost, and cost of time) is not perceived to be worth it. For many/most uses IPv4 works just fine and non-SSL is just fine.


Don't forget the cost and barrier to entry of setting up the cert and SSL and learning to administer the extra steps well, without introducing more holes through complexity.


IPv6 is free as in 4 billion ^ 4 addresses free.


Presuming you're noting exponentiation with ^, 4 billion ^ 4 billion addresses would mean about 128 gigabits (16 gigabytes) per address. IPv6 addresses obviously don't take up gigabytes each in any sane encoding.

IPv6 has 128-bit addresses, which works out to about 4 billion ^ 4 addresses, not 4 billion ^ 4 billion addresses.


My original comment said 4 billion ^ 4 addresses, as in 2^128. There is no second "billion" in that line.


What about embedded devices?

Not everything on the web is sitting on a well-known server.


Right - embedded devices, one off toy apps, a lot of internal organization pages, and a lot of hobbyist projects make up a huge part of the "web space". These will all suffer for more reasons than cost of certs - it adds a new hurdle and a barrier to entry.

Think about printers for a moment: now all the printers providing http interfaces need to include a way to install an organizational cert on them (at least for a lot of organizations). That means that there needs to be an out of band step in setup (and maintenance) to add the cert, or a way to do so from an http interface. The latter just screams "giant security risk" for a dozen reasons.


I am sure it happens but you should not be exposing your printer to the Internet. That is just asking for trouble. You would not need HTTPS on an internal network.


> You would not need HTTPS on an internal network.

Oh, really?

http://www.washingtonpost.com/world/national-security/nsa-in...


But HTTP 2 requires it, no matter if you need it or not.


No, it doesn't. From the article: “To be clear - we will still define how to use HTTP/2.0 with http:// URIs, because in some use cases, an implementer may make an informed choice to use the protocol without encryption.”


The reason for your parent comment (and my initial misunderstanding) was that this post title was submitted as "HTTP 2.0 to be HTTPS only". By the time I refreshed, the title had been changed, but this is why we need to stop modifying original article titles in order to bait more views.


So you require a cert for personal projects. That doesn't mean a cert that chains to a public trust. You could easily cut your own cert and trust it on whatever device you wish to access the site on.


And for e.g. intranet usage the organisation could set up their own internal CA to validate TLS certificates. The root certificate could be distributed in a manner suitable to the organisation. E.g. via Group Policy for Windows clients, or by simply including it in the disk image used for setting up new machines.
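
Sketching the client side of that setup in Python (the file path and host name below are hypothetical): trust only the organisation's root instead of the public CA bundle, so anything not chaining to it is rejected:

    import ssl, urllib.request

    # Trust only the internal root that was pushed out via Group Policy / the
    # disk image; connections to hosts whose certs chain to it succeed,
    # everything else (including publicly trusted certs) fails verification.
    ctx = ssl.create_default_context(cafile="/etc/pki/internal-root.pem")
    resp = urllib.request.urlopen("https://intranet.example/", context=ctx)
    print(resp.status)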


Sure, but there are many new (and not-so-new) "internet of things" devices that explicitly _do_ want to be able to connect to the internet - and a great deal of additional value derives from that ability.

I've spent a lot of time recently working out how to securely allow a set of christmas tree lights with an embedded linux controller[1] with wifi connect via OAuth to your Twitter or Facebook account while being controlled from your phone. The lack of workable/affordable ways to have SSL keys on the device that your phone will trust makes life _very_ interesting - and getting the password-equivalent OAuth tokens into the device has been a fun challenge.

[1] Gratuitous self promotion - http://moorescloud.com/ go pre-order one now to justify getting UL certification so we can sell 'em in North America! _Please!_ ;-)


> You would not need HTTPS on an internal network.

This is false. Good security is layered security.


Sounds like devices that wouldn't make the switch to HTTP 2 anyway.


So your argument is let's create a new version of a protocol, but make it less capable than the version which precedes it, so that there are very valid use-cases which cannot be solved using the new protocol version, forcing us to rely on multiple protocol versions for what should be the same thing?

How on earth do you make such an argument make sense?


Did you mean to send that reply to me?


And what about wildcard certificates?


The good ol' Subdomain-versus-Subfolder debate just gets a bit more expensive on the left side, that's all.

Services that really need an unlimited number of subdomains are a tiny minority, and market prices reflect this. For the time being, someone like WordPress.com can probably afford $60-$100/year for a wildcard certificate. Everyone else just sticks to subfolders like Twitter does.

After all, nobody will be preventing you from running a website. Your priorities and economic circumstances might prevent you from using pretty subdomains, but that's no different from the current reality where short and memorable dot-com domains cost thousands of dollars.


And they're compatible with every version of IE on Windows XP, unlike those free certs from StartSSL.

This matter has nothing to do with the version of IE and everything to do with whether Windows root cert update is turned on.


Domain names at that price should include an SSL certificate.


The first proposal doesn't require you to buy a certificate, see: http://tools.ietf.org/html/draft-nottingham-http2-encryption...

With that http will be encrypted with no certificate check and https will still have the good 'ol check.


The irony is that in that situation http with ssl and a randomly generated cert will be more secure than HTTPS which uses the CA's cert. Hell, I'd like HTTPS to use the CA's cert for identity but use a self-signed cert for actual data transfers.

CA's are a single point of failure for security.


Don't worry, you just don't understand how TLS works :-)

The CA never gets the private key. Instead they get a certificate signing request (CSR), which only contains the public key part. They sign that.
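
A small illustration of that flow with Python's third-party `cryptography` package (the hostname is made up): the key pair stays on your server, and only the CSR - carrying just the public key and your identifiers - is sent to the CA:

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
        ]))
        .sign(key, hashes.SHA256())
    )
    # This PEM blob is everything the CA ever sees; the private key stays put.
    print(csr.public_bytes(serialization.Encoding.PEM).decode())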

Oh, and then there is perfect forward secrecy, which basically means that even the server's private key is not the one used to encrypt the actual data (after the initial handshaking, and only for suitable cipher suites, subject to downgrade attacks).

Disclaimer: at least, that's how it's properly done. Some CAs offer a "send us your cert and we'll sign it", and dumb people who shouldn't be admins use it because it's (slightly) easier to use.

But you got the conclusion right, the notion of CAs is problematic.


"dumb people who shouldn't be admins"

This is what kills CA security. Anyone at an employer with over 5 people in the IT dept probably has someone who can insert a CDROM but has no idea how to set up CA and SSL stuff installing intranet internal servers using https and a self-signed cert.

So we're carefully raising a whole generation of users programmed to accept any self-signed cert, after all "that's how the benefits website is at work" or "that's how the source code mgmt site is at work". Then they go home, and oddly enough their bank presents a new self-signed cert, or at least they think it's their bank, and much as they have to click thru 10 times a day at work, they click thru the same popup at home and then enter their uname pword and ...

Paradoxically as a budget weapon it's excellent, because you probably have good enough physical security at work and frankly it's usually not something worth protecting anyway, but it is incredibly annoying, so you can bring up at budget meetings that IT can't fix the SSL cert errors on some meaningless server because they can't afford it, etc. Not technically true, but J Random MBA managing something he knows nothing about can't figure it out, so it's a great budget weapon. Highly annoying but doesn't really hurt anything.

To fix this you'd need something like a standard enterprise programmers' union contract rule that enterprise programmers will never, ever, ship enterprise software that allows a self-signed key. Good luck defining enterprise software, I suppose.

And in the spirit of idiot proofing leads to better idiots, requiring no self signed keys means idiots will create their own root and train users to import any root they ever see anytime they see one. Then distribute a non-self signed key signed by the imaginary "Innitech CA services" root. What could possibly go wrong with training users to do that?


For internal websites, be your own CA and distribute the cert via AD (or include it in your OS image, or whatever).


In the spirit of "idiot proofing leads to better idiots" of course that will not happen.

In fairness, if you have a heterogeneous network of legacy Windows, some Macs for real work, legacy BlackBerry, and both kinds of real smartphones, distributing it "everywhere" can get kinda hard.


except that the CA or CA hacker can impersonate you, thus it's still one of the multiple single points of failure


Yes, but they can also do so if you use a self-signed certificate, by just self-signing their own. There's no way that's less secure than a CA-signed cert.


As far as I know, self-signed certs have to be approved on a case-by-case basis in most browsers. Thus if a site is hit by MITM, the cert will change and the browser will warn. Of course, that's assuming you've visited the site before and care to pay attention to the warning.


Besides geococcyxc's remark, how are you to know that the first certificate is legitimate? How are you to know that the new certificate after the old one has expired is legitimate?

If you want pinning, there are better solutions: http://patrol.psyced.org/


Care to elaborate? I do not think you will get a warning if the MITM is done with a certificate signed by a valid CA, even if you have approved some self-signed certificate before for that site. At least I have never seen this in any browser.


You'll be protected against NSA-style snoop-everything passive attacks.

CAs will always be able to MITM you. Like I said: "the notion of CAs is problematic."

There are two caveats:

1) certificate pinning: your browser has a hard-coded list of certificates for all major websites (e.g. Chromium: https://code.google.com/p/chromium/codesearch#chromium/src/n... (scroll down!))

2) there are add-ons (ie Certificate Patrol) that warn you when the certificate changes


One service I can recommend is https://www.startssl.com/ (no affiliation, just a customer) who offer free certs to individuals, and cheaper certs to businesses (their prices on wildcard certs and multiple domain certs are the best I've seen online at $59).


As an individual, can I just say -- don't ever mess up, lose your key or need to regenerate your certificate before the expiry date.

Stay within those limits, or it will cost you $25 (because of "revocation costs").

StartCom are pretty awesome, but be aware of potential pitfalls.


I had to revoke my wildcard cert a few weeks ago. You can tell them why you did that. As far as I know they decide to charge you on a case by case basis. When I revoked my cert I got an email 3 minutes later saying: "Revoked free of charge".


Not my experience at all.

To quote them:

"Class 1 certificates aren't revoked free because we receive too many requests daily (specially for the Class 1 free certs) and would we have revoked them all, our certificate revocation list (CRL) would have been blown out of every proportion."

In a further back-and-forth, the admin proceeded to tell me how much bandwidth I would cause them (I don't even care about being added to a CRL for a personal domain).

Edit: Sorry, you did say a wildcard cert, which sounds like a paid cert, so would offer more "service" I'm guessing.


Their verification service is annoyingly rigid. Anything other than a phone call to a number listed on a phone bill (and no fair blacking out other numbers on a family plan, for instance) or waiting a couple of weeks for a letter from Israel is rejected, even when the information is easily verified using online government databases[1].

1 - Not an NSA joke, more that "hey, voter registration and property tax rolls are public and online; you could just verify that, no?"


Here's a suggestion:

http:// is encrypted but performs NO certificate check.

https:// is encrypted but performs a certificate check.

Done.
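
Client-side, the split could look something like this sketch with Python's ssl module (purely illustrative, not a proposal-conformant implementation):

    import ssl

    # "https://": encrypt AND verify the chain + hostname, as today.
    strict = ssl.create_default_context()

    # Hypothetical "http:// over TLS": encrypt, but skip verification, so it
    # only defeats passive eavesdropping, not an active MITM.
    opportunistic = ssl.create_default_context()
    opportunistic.check_hostname = False
    opportunistic.verify_mode = ssl.CERT_NONE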


Did you read the linked message?


I did, but didn't pick up that's what was meant. So the accidental tl;dr above was helpful.


As a client, it's easy to be your own CA. Then you can just obtain the remote server certificates you need and, if you trust those endpoints, sign them yourself.

The problem is how to get those certificates and be sure they are the right ones. The problem was already solved in the pre-Internet world: letterhead, signatures and postal mail.

Trustworthy commercial entities could really distribute their certificates in paper form (or maybe on a business card) as an official document. Customers then scan these into their computing devices and, if they choose, sign them.

I doubt that anyone is pushing HTTPS based on the authentication function. It is the need for encryption that is probably the impetus.


the smaller sites currently not using SSL will just be stuck on HTTP 1.1 forever

Is it a problem?


Saying that you can only use this new technology if you are willing to hand out $100 a year to some third party provider which you really don't see the need for is a major hindrance to adoption.

Imagine if every former "free" technology (tcp/ip, email, http, c-compiler, whatever) demanded you pay $100 annually to use it. How many hackers creating things which we now take for granted do you think would have been discouraged from doing just so?

Security is nice, but that doesn't mean it's worth it or required all over, at any cost. De-facto requiring a paid "license" to operate on the internet is not the right way to go.


What are you running your web servers on? How do you connect these web servers to the internet? How do you register your domain?

I would assume that you pay more than $100/y (which is expensive for a domain validated SSL cert, btw) unless you're using a free hosting provider at which point it's not your decision what protocol to use anyways.


I have 10mbps upstream connection. I have a Raspberry pi and a Linux-powered NAS. I have lots of equipment which costs me nothing to use, which I can use to host or create new internet services. And I do.

No, it won't let me run a multi-million user site, but that's not my aim, nor should it be needed to let people new at the game fool around.

Putting the bar higher and higher to just being able to fool around is so utterly the wrong way to go.

I wonder if anyone on this site remembers what it was like to be 8 years old and already being able to program your first program on your TRS-80 or Commodore 64.

No money needing to be spent, no need to seek permission. Just hack. Get immediate, direct feedback. Instantly gratifying. That approach gave us a generation of computer-professionals unlike any other. Why are we so eager to put the road-blocks on now?


> I have 10mbps upstream connection. I have a Raspberry pi and a Linux-powered NAS.

that means you pay for your internet connection and for the hardware you run the website on. Why is also paying for a certificate a problem?

I get the hobbyists approach, but especially for hobbyists I think it's better to stay with HTTP/1.1 which, as a plain text protocol is a lot easier to learn than the complicated ugly mess of HTTP/2.0. Also because of the SSL requirement, development will probably never happen over HTTP/2.0 - or do you want to create or even purchase new certificates for all your development projects?

A HTTP/1.1 server is something normal people can implement.

A HTTP/2.0 server is something for others to implement and a pain to debug.

I see HTTP/2.0 as a new transport protocol to transmit what we know as HTTP/1.1. None of the request/response semantics has changed between 1.1 and 2.0 (minus server push, but if you want to support older clients, you'd have to use other techniques anyways).

If you're just running your own little page, nothing is stopping you from using HTTP/1.1. Once your site is big enough to actually benefit significantly from HTTP/2.0, you will have the money for a certificate.

It's the latency for your clients you can shrink with 2.0, but you'd get bigger benefits from moving off hosting on a cable modem than by moving to 2.0. At that point, you'll have other, bigger costs to pay than the certificate.


> that means you pay for your internet connection and for the hardware you run the website on. Why is also paying for a certificate a problem?

Because the internet access and the hardware would have been bought regardless of the activity, for other purposes. (S)he was already paying for it, and used it for other purposes. Running a web site _happens_ to be one of its uses, but is not the only goal. The fact that it is free to run a web site means that I can run my website on my laptop.

On the other hand, the certificate would have to be bought _only_ for this, because it is its only use: be able to play the HTTP/2.0 game.


TCP stacks long ago became something for other people to implement and hard to debug. Same with most encryption. There will always be a hobbyist path with http/1 but the biggest sites in the world are building http/2 for their use cases.

The internet is no longer predominantly a hobbyists' playground and hasn't been in some time. Mainstream success leads to this sort of transformation by definition.


Get immediate, direct feedback. Instantly gratifying. That approach gave us a generation of computer-professionals unlike any other.

Are you kidding? Just a few decades ago you would need to pay a lot for machine time to be able to use a computer. Today you can sit near a cafe with a $200 netbook and have free internet access. In the 90's a .com domain cost ~$100. Just a few years ago a TLS certificate cost close to a hundred dollars; today you can buy one at around $10 per year or sometimes get it for free.

You have 10mbps connection! At your home! How much do you pay for it?


Get over my connection. Some places it's quite common.

The thing is you are both missing the point:

The tech landscape is growing increasingly complex. We shouldn't be adding more obstacles to getting involved than we already have.

That's how horrible Legacy-things get built. We don't need to do that to our internet.


It's only a barrier to entry if you insist on using http 2.0 which is a CHOICE you make. Don't use http 2.0, just like you can choose to use your internet connection for whatever.


But according to this discussion over at reddit, HTTP 2 seems to be made required for "real" sites:

http://www.reddit.com/r/technology/comments/1qj1tz/http_20_t...

So now it's NO choice after all. If you want to run a "real" site, not only must you pay rent for your DNS, you are now also being extorted into paying money to CAs. CAs which can be subverted by the NSA, so they're effectively worthless anyway.

That's a bad move. Internet should be getting cheaper, not more expensive.

This whole HTTP 2.0 affair is turning into a real piece of extremely short-sighted shenanigans. Given W3C's green-light on DRM in HTML, we should start questioning if we want to entrust them with these sort of tasks in the future. They have gone completely off the rails.


You misunderstood. Nobody forces you to use HTTP 2.0, you can use HTTP 1.1 or even 1.0.

And, again, HTTP 2.0 has nothing to do with CA prices.


>You have 10mbps connection! At your home! How much do you pay for it?

I hope you understand it's mbit. If so, I've got 50mbps over here and I pay €50 (that's about $67 USD) per month, which includes 50 TV channels and interactive TV (I'm Dutch, I hope I described it correctly)

The situation (s)he describes is fairly common; I've got a Raspberry Pi and an old laptop running as servers over here, on which I do experiments and host small websites. They've got StartSSL certs, which suck, but they do their job at least. If you put enough effort in the process and not just blindly fill in the forms, you'll get there.


So, it's not a problem for you to use TLS with HTTP 1.1? What's the problem with HTTP 2.0 then?


People who are really interested in programming will find a way, they always have.

Did you think typing source code from a magazine (which they didn't carry in your home town because nobody else was interested) wasn't a roadblock or barrier?

Waiting to have access to the family TV so you could plug in your microcomputer? Or saving up to buy a second-hand 12" TV set so you could use it in the bedroom? Paying $1500 (in inflation adjusted dollars) for a Commodore 64 was a pretty big barrier.

If none of those were barriers for you, then I'm sure your parents would have sprung the cash for an SSL certificate.


You can get free SSL certificates from StartSSL. There are also domain registrars who give you a certificate with domains you buy for no extra cost, and other CAs which are less than $10 a year.

And if none of those float your boat, you can always self-sign if you're willing to put up with the warning messages.

Your argument seems extremely hyperbolic.


If you think of it as something similar to DNS it's not so shocking. Right now we are handing over cash to get a usable domain name, or asking some other entity to share a space on their domains. Now you'll also pay for your SSL certificate or ask some other entity to let you share a piece of theirs.


There's nothing in this technology that demands payment. You can distribute your own root certificate to clients, or create a free certification authority (or join http://www.cacert.org/), if you'd like.


Cannot agree more. Besides $100 annually, a CA certificate is harder to manage (considering cloud hosting etc.), and it is not worth doing for many casual sites.

I predict that HTTPS-only HTTP/2 will be doomed.


I don't think so.

We are also "stuck" on IPv4 which hasn't been a problem.


Domain names will probably be sold in bundles with certificates, driving the price of SSL certificates lower.


SSL seems to conflate 2 ideas, proof of identity and an encryption layer. Would it be possible to have the encryption layer without requiring a third party to handle keys?


These ideas are inextricably linked! If you cannot verify the identity of the other end, you cannot verify that a man in the middle is decrypting, monitoring or altering, and then passing the data on to the real endpoint.


Yes, proposal (A) in the article suggests this.


Unless there's somewhere that will let me do domain-only validation, I'm not interested in it either. Sick of places leaking my information that has been "secured".


Have you tried StartSSL?


Yep, they ask for a bunch of information and have a human verifying it. Ideally I just want a domain-only verification (they used to exist, not so much now) that don't want any of my personal information.


They should just do what ssh does.


Totally agree. While this sounds amazing, the process of maintaining certificates can be a nightmare, especially to someone not familiar with the security process. On the other hand, this does present a huge opportunity for a service that would basically manage your company's certificates.


I think complacency and thinking "they don't need it", even if it costs a few extra dollars a year, is the much bigger problem.

Some sites may stick with 1.1 for a while, but my guess is there will be a ton more sites who will be adopting HTTPS because of this.


Why is this the top comment, didn't anybody read the link? Come on, HN!

(hint: in most of the design options there wouldn't be CAs unless you used the https url scheme. See other messages in the linked thread too)


I hate to say it, but the W3C has been making quite a few dumb decisions lately.


Great decision! When Google started to work on SPDY and made it SSL-only, we saw what the future could be: people upgrade to the new protocol for performance, but get better security too. What's not to like! I was really afraid that the standardisation of HTTP/2.0 will break this, but now all seems well after all.

But this is not enough; we also need to work on opportunistic encryption, to be used for sites that do not use SSL today, without any certificates, in a completely transparent fashion that requires no end-user configuration. Such encryption would not be enough to defeat active man in the middle attacks, but it would defeat passive monitoring of non-encrypted communication.

To those complaining about the hassle of SSL: The biggest problem today is the fact that virtual SSL hosting (multiple sites sharing an IP address without sharing the certificate, otherwise known as Server Name Indication, or SNI) is not feasible. As soon as Windows XP (the only major platform that does not support SNI) goes away, SSL will become much easier; especially for hosted services.

That the cost (of certificates) is a problem is a myth. It might have been a problem in the past, but today there are so many CAs to choose from. There are CAs that give away free domain-validated certificates. There are CAs that give away free certificates to open source projects. And there are also companies that sell certificates for a couple of dollars only.

Obtaining certificates is, no doubt, a hassle, but the fact remains that CA-issued certificates are the only practical option to deploy a secure web site today. There are also some issues with latency, but perhaps with HTTP/2.0 (and some possible improvements in TLS 1.3) those are going to be minimised, too.


It is worth noting that Windows XP is EOL April 8, 2014.


Can you really trust the CAs? There have been many cases where a CA was compromised or tricked into signing fraudulent certs, never mind government mandated back doors.


It's all a matter of risk assessment, which will depend on what your security requirements are. Trust that they won't issue an interception certificate when a government agency asks them (with a warrant)? No.

But, if you choose a well-established CA, I believe that the risk of a fraudulent certificate is small enough to be accepted. Besides, how many cases of fraudulent certificates have there been? Very few actually, relatively speaking, when measured against millions of issued certificates every year.

The current arrangements where CAs are able to issue certificates for arbitrary sites without owners' consent is clearly unsatisfactory. Hopefully we'll get key pinning abilities one day to improve the situation.

At the end of the day, security is hard. The arrangement with public CAs is flawed, but we don't yet have a better solution that scales. For small (close) groups, a private CA with manual user pinning should work well enough, when deployed with HTTP Strict Transport Security.


> But this is not enough; we also need to work on opportunistic encryption, to be used for sites that do not use SSL today, without any certificates

That's exactly what this proposes!


No, not as far as I can tell. The linked document has two options for opportunistic encryption, one of which is without server authentication, but that does not mean without certificates. The draft http://tools.ietf.org/html/draft-nottingham-http2-encryption... also states that the certificates would be used, just not necessarily checked.

My issue with the use of certificates is that it would in practice probably mean required manual server-side configuration, which would be a barrier to adoption (even if self-signed certificates are allowed). I would prefer a certificate-less approach that is available by default and for all sites.

The proposal also requires HTTP/2.0 for opportunistic encryption, even though we could probably make it work with HTTP/1.x too.


Well, ignored certs would be functionally near-equivalent to no certs.

If implementations ended up keeping some meaning for them, you could look at how most ssh server keys are generated - no configuration.


Here's hoping we have a viable, popular alternative to the current (expensive, corrupt..) system of SSL certificate signing long before HTTP(S) 2.0 becomes prevalent..


DANE TLSA is a possible (viable) alternative here.
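
Roughly, the idea looks like this sketch (assuming Python with the third-party dnspython package; only the simplest TLSA case is handled, and DNSSEC validation of the answer itself is omitted here):

    import hashlib, ssl
    import dns.resolver

    def tlsa_matches(host, port=443):
        answers = dns.resolver.resolve(f"_{port}._tcp.{host}", "TLSA")
        der = ssl.PEM_cert_to_DER_cert(ssl.get_server_certificate((host, port)))
        for rr in answers:
            # Only handle "full certificate, SHA-256" (selector 0, mtype 1).
            if rr.selector == 0 and rr.mtype == 1:
                if rr.cert == hashlib.sha256(der).digest():
                    return True
        return False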


Recently learned of http://convergence.io/ (talk: http://youtu.be/UawS3_iuHoA ) which sounds quite promising. It's not perfect (private, non routable sites?) but removes the certificate authority oligopoly from the picture without reducing security (AFAICT).


how could that work?


How about the TLD NICs sign your certificates when you register the domain?

Ideally they have already verified your name, company and address, and you have to trust them to some extent anyway, because they are responsible for the name servers.


Ehm, no thank you. Not all of us want to give away such information for personal projects; there are many reasons for people wanting to register domains privately (besides spammers harvesting whois).


If you give your registrar invalid contact information, your domain is subject to deletion by policy. Proxy registration is OK, as long as the contact information works.


At some point you have to give your information whether it's to the registrar or the proxy company. What is wrong with them using the same information to handle the certificates on your behalf ?


AFAIK DNSSec has no additional identity requirements compared to normal DNS.


> and you have to trust them to some extend anyway, because they are responsible for the name servers

So this is not solving the problem, this is moving it elsewhere.


Moxie Marlinspike's Convergence (http://www.convergence.io/) seems the best proposal, at least for the time being.


That seems awfully similar to http://perspectives-project.org/

I don't know which one was first, but I wish they would cooperate to establish a standard protocol for notaries.

The model of notaries that observe SSL certificates from multiple points in the internet seems greatly superior and ultimately more trustworthy than the CA model to me. It's not perfect, but it solves the most common man-in-the-middle scenarios and is potentially extensible to become even more robust.


Perspectives/Convergence really is a great system, but it unfortunately still has several problems:

- it completely leaks your browsing history: you basically ask a notary "what's the certificate you see for kinkyneighbors.com?". Convergence addresses this, though - it requires network-heavy intermediaries for all the browsing of all the people around the world.
- it still doesn't solve authenticity: an attacker could very well be controlling all connections arriving at your house, or leaving the target's server, and fool everyone

Convergence/Perspectives should be coupled with certificate pinning, aka storing _really_ trusted authorities (i.e. verified by hand) on your computer. Guess what? Moxie's next project is just that [0]

(For anyone curious, I highly recommend Moxie's talk [1] about Convergence, it does a great job at explaining what's the problem, what's Convergence and how it can solve at least part of it)

[0] http://tack.io/

[1] http://www.youtube.com/watch?v=Z7Wl2FW2TcA


Convergence's "details" page sez:

> Convergence is based on the ideas originally developed by the Perspectives Project at Carnegie Mellon University.


Convergence is a great idea, but, sadly, the project appears to be dead. The last commit to the repo was 2 years ago, and (as far as I know) the Firefox plugin has been broken for a very long time.

We (Qualys) are running several notaries and are part of the default configuration, and we're seeing very little traffic.


Certificate currently has two goals:

- Verification
- Encryption

The CA is supposed to verify and say "hey this certificate belongs to this company".

What we need is for anyone to set up their cert without a CA (self-signed) and then the CA provides the verification if companies really want it.

This is what happens when you try and dual purpose something. If the certificate was just about encryption then my assumption is that you wouldn't really need CAs.


Without validation, (public key) encryption is worthless, due to man in the middle attacks.


It's worthless in the face of an active adversary, but it works very well as a countermeasure for passive mass surveillance.

I do think it's a dangerous idea, though. The difference between 'secure' and 'insecure' is (at least partially) understood by most technical and non-technical people, and sometimes they can make a good decision on their requirements. The difference between passive and active adversaries is much more subtle, and I doubt people can think this through with as much clarity.


No, it's not worthless. It /raises the cost/ of an attack, by forcing an adversary to implement a more complicated, expensive MitM attack, instead of simply using passive eavesdropping/packet-sniffing.

And to those bringing up the tired, old rebuttal of this providing "worse" security due to a false sense of protection: that's only relevant if the browser is written idiotically and suggests this is in some way the same security as the fully-authenticated version. They should not be showing a "closed padlock" and changing the address bar color for self-signed SSL!


Placing your public key in a dnssec validated DNS record would be a good way to replace the validation component done by most CA's.


If enrolling your key in a PKI controlled by world governments seems like a good replacement for a bad private PKI, yes, by all means, pursue DNSSEC as an alternative.


Only if the middle man was already doing the attack on your first visit to the website. This is a whole lot better than no encryption at all.


> Only if the middle man was already doing the attack on your first visit to the website.

Keep in mind certificate pinning is a fairly (very!) recent addition to the internet security landscape. Before then MITM more or less completely broke encryption.


> Keep in mind certificate pinning is a fairly (very!) recent addition to the internet security landscape

As with much technology it is a re-invention of how we used to do things.

Many corporate websites still use client-side certificates to ensure that the client is talking to the correct server.

In the early days of Internet banking, some bank sites used to do the same; I received a cert from my bank on a shiny 'CD-ROM'. Sadly they discontinued that validation along with publishing their PGP key for secure e-mail. A step backwards.


How is it worthless? It protects you against passive monitoring/data retention!


Like I said, it offers no protection against man in the middle attacks. It doesn’t matter whether it protects against some other techniques because you know, the attacker will just simply use the technique it doesn’t protect against. It is really simple as that.


Encryption without verification only protects you from passive attackers, though. Frankly I fail to see the point, since it's not secure enough for sensitive data, but still has the disadvantages (performance, cache busting) of SSL.


This. It's worse than useless because it's the illusion of security.

It's too bad, because some type of web-of-trust mechanism for HTTP would be an incredible idea - it doesn't solve the trust problem entirely, but it would enable users to share their trust profiles amongst or against trusted individuals.


What will this mean for caching proxies? These can be really useful in datacentre environments.

For data that is nominally public anyway, I prefer to be able to stick a caching proxy somewhere, and rely on other means (eg. apt's gpg and hash verification) to ensure integrity.

The article says: "Alternate approaches to proxy caching (such as peer-to-peer caching protocols) may be proposed here or elsewhere, since traditional proxy caching use cases will no longer be met when TLS is in wider use."


You might be confusing forward- and reverse proxies. Transparent forward proxies can now go bugger off. They will not be able to intercept HTTP/2.0 traffic.

Reverse proxies, in front of web applications will need to terminate the SSL before caching. Same as today.


Proxies don't have to be transparent. Non-transparent forward proxies that I set up and choose to use are very handy because I get a direct performance improvement out of them.

As an aside, I hear that in much of the more distant parts of the world (Australia/New Zealand) transparent forward proxies are common amongst consumer ISPs to help with their high latency to the rest of the world.


What's wrong in principle with transparent forward proxying anyway? From almost any perspective other than security/anonymity, forcing a client to make a TCP connection to a publisher's computers every time the client wants to read one of the publisher's documents* is a terrible decision: stark, screaming madness. If transparent forward proxying breaks things with HTTP then that's a problem with HTTP. Even from a security/anonymity point of view, an end-to-end connection is no panacea either: if encrypted it (hopefully) prevents third parties from seeing the content of the data sent, but it also makes damned sure that the publisher gets the client's IP address every time the client GETs a webpage, and as recent events have illustrated publisher endpoints aren't super-trustworthy either.

* Unless the client itself has an older copy which it knows hasn't expired; but a single client is much more unlikely to happen to have one of those than is a proxy which handles requests from many clients, and probably has more storage space to devote to caching too.


If you own the client and the proxy its still possible. Install a controlled root cert on the clients. On the proxy dynamically create & sign certs for each domain the client requests. Present these false certs to the clients connection. On the "north" end the proxy is now responsible for verifying the remote cert chains etc. Proxy has access to all the bytes, programmatic clients succeed in auth, human clients have a green check box in the browser. Totally doable today. I think some of the commercial appliances even do this for you.


Instead of dynamic creation, could you sign a wildcard cert that covers everything?


I suppose so, or maybe *.tld? I'm thinking it would depend on your clients' behavior. Clients don't go to that many unique fqdns; dynamic creation + caching should be quite achievable.


For APT couldn't you just force HTTP v1 to permit caching?

Most clusters of linux boxes I've admined I end up with a dedicated APT proxy on one machine, not a generic http network level proxy. The proxy I use has varied over time but at this moment I'm mostly using the approx package.


I could; I don't like the idea of multiple versions of a standard being necessary, though. Why can't the newest version support all reasonable use cases?


Large companies who install transparent proxies can often find a 10%-20% byte hit rate. It is sad to see this go away with the move to HTTPS everywhere.

I wonder if transparent proxy caches are a thing of the past now?


I kind of hope so. While they were very convenient, they had some limitations, and had some very confusing failure modes (for example, trying to go to an HTTPS site before accepting the boilerplate TOS agreements).

I think developing some sort of proxy discovery protocol and making it clear to users that their connection is proxied is a much better way forward.


Proxies are not the only thing that is necessary to support - login portals is another related usage in the same general area.

Basically the problem is that the encryption possibilities available assume Internet is a dumb pipe between source and destination, but that is simply not the case.


https: should use CONNECT regardless of version. For http: the proxy will probably decrypt and then re-encrypt which will allow caching and inspection.


With the way Firefox gives that scare-popup on a self-signed cert, mandating SSL would only make people that cannot afford (or cannot get) a signed cert into second-class citizens on the web. Remember that one of the primary benefits of "the internet" is that all peers are equal as far as the network is concerned, and the barrier to entry for publication is reduced to zero.

Anything that ends up as a barrier to increase that cost only serves the interests of those that wish the internet could be reduced back to "cable tv", with gatekeepers able to regulate what is published while taking in a publication fee as tribute.

I am usually one pushing hard for encryption, but more PKI is not what is needed. The idea above about using DNS to distribute keys is a good idea; I would also suggest simply mandating that self-signed certificates [1] be treated fairly, without the scare-box. Either would still allow somebody to setup a home server with a simple apache/whatever install, no outside approval needed.

[1] - Note that I said "fairly", not "the same as authenticated certificates". Encryption without authentication is still a benefit, and should not be given the popup that scares people away currently. Just don't mark it as "secure" with the closed padlock!


I can't say I agree with this. I am all for ubiquitous encryption, but this smacks of inappropriate mixing of layers. HTTP should not care about the underlying transport.

Someday, TLS will be replaced. I cannot imagine when this will happen, or what the replacement will be like; I am only certain that, given enough time, it will happen. When it does, HTTP should still work without modification. The proposed standard fails that test.


TLS on port 443 is what makes SPDY / HTTP2 deployable. That way the new protocol is hidden from middle boxes that mess with port 80 traffic and would break things if the traffic didn't look like HTTP 1.x.


I'm not surprised; this does simplify things in many ways, and to be honest, in many cases there isn't a good reason not to use SSL anyway.

What we need next is a browser-happy way to use HTTPS only for encryption and not for verification (yes, I know!), which would make this migration much easier. It would reduce the reliance on CAs and make SSL certs free in many cases.


What we need is to do away with CAs entirely and use the DNS hierarchy to distribute keys (with DNSSEC for example). That way you only have to trust your own registrar (that you already need to trust anyway) and not any random CA in the world. So if you own example.com you get the COM registrar to sign your key and become your own CA for *.example.com.
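
DANE's TLSA records are the concrete form of this idea. A rough sketch of what client-side verification could look like, using dnspython and assuming a DNSSEC-validating resolver (the hostname is just an example):

    import hashlib
    import ssl

    import dns.resolver  # dnspython

    host = "example.com"

    # Hash the certificate the server actually presents...
    pem = ssl.get_server_certificate((host, 443))
    der = ssl.PEM_cert_to_DER_cert(pem)
    presented = hashlib.sha256(der).hexdigest()

    # ...and compare it against the TLSA record published under the domain.
    for rr in dns.resolver.resolve("_443._tcp." + host, "TLSA"):
        # usage=3 (end-entity), selector=0 (full cert), mtype=1 (SHA-256)
        if rr.usage == 3 and rr.selector == 0 and rr.mtype == 1:
            print("match" if rr.cert.hex() == presented else "MISMATCH")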


Seems like you could accomplish the same thing by CA pinning your site to Verisign (who runs the DNSSEC root).


That and an expensive *.example.com certificate would get you half of the way there. You still wouldn't be able to be your own CA, signing certificates for foo.example.com that other people can use. And it would require everyone to buy certificates from Verisign.

Nowadays national governments already set up their own CAs because they want to be able to issue certificates for all sorts of government organizations. With the setup I'm suggesting, Germany would get .de signed, could then sign gov.de, and then have gov.de sign someagency.gov.de. somecompany.de would get its certificate from the DE registrar when signing up for the domain, and would also be able to issue somepartner.somecompany.de or jabber.somecompany.de certificates that, if compromised, only expose part of its network, unlike a wildcard certificate installed on every server.


Doesn't this just mean the registrar ends up (effectively) being the one who trusts all the random CAs instead?


What random CAs? All the registrar does is say "I, the owner of COM, have delegated EXAMPLE.COM to FOO, both by pointing the address to their servers and by signing their public key." After that, example.com can delegate within its own domain: both the addressing (regular DNS) and the signing of certificates for subdomains.


Good reasons not to use SSL:

- SSL has a significant number of required packet exchanges at startup, so latency for small objects is much larger; this is especially noticeable if there's a high round-trip time between the client and server.
- SSL blocks proxy caching.
- The CA situation isn't great, especially if you're supporting older mobile clients, which basically never update their root certs.
- Debugging is harder (and HTTP/2 already gets complaints about that).


> SSL has a significant number of required packet exchanges at startup

This is improving, however – see e.g. http://www.igvita.com/2013/10/24/optimizing-tls-record-size-...

> SSL blocks proxy caching

This is only true for involuntary transparent proxying, which is also a huge source of reliability problems. After seeing how many times JavaScript is broken or images are degraded by "helpful" ISPs, I'm quite happy to change the model to something which requires the client to opt in to enable a proxy.


> This is improving, however – see e.g. http://www.igvita.com/2013/10/24/optimizing-tls-record-size-...

That's interesting (and I'll try to apply it where I'm running SSL), but what I'm more worried about is the very beginning of the connection. http://chimera.labs.oreilly.com/books/1230000000545/ch04.htm... SSL adds two extra round trips, which doubles the latency of a small request. This is really unfortunate when you're already in a high-latency environment.
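
A quick way to see the effect on your own links (the hostname is just an example; numbers will vary wildly with RTT and with whether session resumption or False Start kicks in):

    import socket
    import ssl
    import time

    host = "example.com"

    # Bare TCP connect: one round trip.
    t0 = time.perf_counter()
    with socket.create_connection((host, 80)):
        tcp_ms = (time.perf_counter() - t0) * 1000

    # TCP connect plus a full TLS handshake: the extra round trips show up here.
    ctx = ssl.create_default_context()
    t0 = time.perf_counter()
    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host):
            tls_ms = (time.perf_counter() - t0) * 1000

    print("TCP: %.1f ms   TCP+TLS: %.1f ms" % (tcp_ms, tls_ms))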


reason: $$


The main issue I have with HTTPS is that it's still not reliably possible to use name-based virtual hosting over SSL, because SNI isn't supported in some OS/browser combinations that are still in heavy use (any IE on Windows XP, Android < 3).

This means we're going to need many more IP addresses in cases where we want to host multiple HTTPS sites. That is a problem because we're running out of IPv4 addresses, and IPv6 support within the range of systems not supporting SNI isn't that reliable either.

This might not matter that much in the future, because larger sites should still have enough IPv4 addresses, but it will hurt smaller sites.

In my case, I can't possibly offer SSL for all of our customers (most of them are using their own domain names, so no wildcard certificates): back in the day I only got 32 IP addresses, and it's next to impossible (and very, very expensive) to get more nowadays.
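
For reference, here's roughly what name-based virtual hosting over TLS looks like with SNI on the server side (a minimal sketch using Python's ssl module; hostnames and cert paths are placeholders). Clients that never send the server_name extension can only ever be handed the default certificate, which is exactly the problem:

    import socket
    import ssl

    # One context per hosted site, each with its own cert/key.
    contexts = {}
    for host in ("alice.example", "bob.example"):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("/etc/ssl/%s.crt" % host, "/etc/ssl/%s.key" % host)
        contexts[host] = ctx

    default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    default_ctx.load_cert_chain("/etc/ssl/default.crt", "/etc/ssl/default.key")

    def pick_cert(ssl_sock, server_name, initial_ctx):
        # Called during the handshake with the SNI hostname (None if the
        # client didn't send one, e.g. IE on XP or old Android).
        if server_name in contexts:
            ssl_sock.context = contexts[server_name]

    default_ctx.sni_callback = pick_cert

    with socket.create_server(("", 443)) as srv:
        with default_ctx.wrap_socket(srv, server_side=True) as tls_srv:
            conn, addr = tls_srv.accept()  # handshake selects the right cert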


Windows XP support ends in April 2014. Android 2.2/2.3 is currently at 28% market share and dropping.

There's a point where you have to drop support - even if you've got laggards. Look at the stats for your sites and find out the percentage of IE users on Windows XP. A quick sampling of two popular sites I'm running shows it to be around 2-3% for IE/XP users - I wouldn't call that heavy use, but obviously it's going to be different for every site.

Charge higher prices for those wanting a dedicated IP for their site - pass on the costs you'll be facing due to IPv4 exhaustion. Prioritise the more important sites for dedicated IPs: if a site generates revenue, keep it on a dedicated IP a bit longer; if not, it can share an IP with SNI.


I'm using SAN certificates (multiple domains on a single cert) for that purpose. It allows me to put up to 100 domains on a single IP/certificate, which is fine for a low-traffic SaaS service. Premium clients can still get their own IP and certificate.

Globalsign has even developed a special service for those cases (https://www.globalsign.com/cloud/).
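
If you haven't set one up before: a multi-domain cert is just an ordinary cert whose SAN extension lists every hostname, and the CSR you send the CA looks something like this (a sketch with placeholder domain names, using Python's cryptography package):

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, "saas.example"),
        ]))
        .add_extension(
            # Every customer domain served from the shared IP goes here.
            x509.SubjectAlternativeName([
                x509.DNSName("saas.example"),
                x509.DNSName("customer-one.example"),
                x509.DNSName("customer-two.example"),
            ]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )

    print(csr.public_bytes(serialization.Encoding.PEM).decode())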


IE on Windows XP and Android < 3 also do not support HTTP 2.0, so this would not affect them.


Browsers that don't implement SNI also don't implement HTTP 2.0. In fact, the upgrade to HTTP 2.0 on existing ports requires SNI. It is impossible to implement HTTP 2.0 with SNI.


Uh, "with"?


Doh, "without SNI".


HTTP 2.0 will still be defined for unencrypted operation, for deployment on Intranets. This default is for the open web.

Since a number of people don't seem to be reading TFA, I'll help:

"To be clear - we will still define how to use HTTP/2.0 with http:// URIs, because in some use cases, an implementer may make an informed choice to use the protocol without encryption. However, for the common case -- browsing the open Web -- you'll need to use https:// URIs and if you want to use the newest version of HTTP."


That's not what the proposals A and B say - only C says http:// would be cleartext. Also the followup message(s) say the poster misrepresents the C option presented at the meeting.

See http://lists.w3.org/Archives/Public/ietf-http-wg/2013OctDec/...


Won't this give a third party the ability to disable your site by getting your certificate revoked against your will? For example, over a copyright issue?

Personally, I am against this, as I feel that this is a choice that should be made on a site by site basis.


Yes. They can already do this more reliably by disabling your domain name, though (not to mention your hosting).

Edit (T+16 minutes):

I said "more reliably" because not all TLS clients perform CRL or OCSP checks -- as mentioned in other comments -- so it's not 100% effective. Web browsers probably mostly all do, though. Certainly enough of them do that running a website with a revoked cert is impractical.

As for disabling your domain name, if you don't know, that really does happen. US law enforcement seizes domains on US TLDs (such as .com) all the time. Edit (T+23 minutes): ... and registrars have been known to cave to strongly-worded letters from civilians, too (see e.g. Go Daddy, MySpace and seclists.org several years ago).


I'm 90% sure you can't disable someone's SSL certificate once they have it. Which is why most of the time it takes a while to actually get one.


OCSP and CRLs allow a CA to revoke an SSL certificate (albeit with a varying degree of browser support).
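
For anyone wondering what that looks like in practice, here's a rough sketch of an OCSP status check (file names are placeholders; assumes the requests and cryptography packages): pull the responder URL out of the cert's AIA extension, POST a request, and read back GOOD / REVOKED / UNKNOWN.

    import requests
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp
    from cryptography.x509.oid import AuthorityInformationAccessOID

    cert = x509.load_pem_x509_certificate(open("site.pem", "rb").read())
    issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

    # The certificate itself says which OCSP responder to ask.
    aia = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess)
    ocsp_url = next(
        d.access_location.value
        for d in aia.value
        if d.access_method == AuthorityInformationAccessOID.OCSP
    )

    req = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()
    resp = requests.post(
        ocsp_url,
        data=req.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"},
    )
    print(ocsp.load_der_ocsp_response(resp.content).certificate_status)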


This made me sigh.

Don't get me wrong... As a lifelong security guy, I'm happy to see more encryption. But implementing more security at one layer adversely impacts security at other layers (e.g. IDS).

We're really bad (as a species) at unintended consequences....


IDS is itself an unintended consequence of a) app layer bugs enabling the intrusion, and b) opportunistic mass malware that doesn't even consider itself worthy of using TLS.

Personally, I'd rather make life harder for the pervasive eavesdroppers and the semi-skilled attackers.


Isn't SSL often terminated at some network equipment in front of the real webserver? An IDS can still work behind that...


He's probably talking about client-side IDS, such as in a corporate environment.

It's worth noting that in such an environment, he likely controls the client machines themselves (i.e., only corporate machines on the corporate network), so it's straightforward to just push out a trusted Certificate Authority and intercept anyway.


As a security novice could you expand?


Snarky answer: He wants to be able to spy on his users in order to protect them from themselves.


Also want more information about what you said.


IDS was only one example.

When technology evolves, we tend to break the things we built to work around the limitations of the previous technologies.

There's a whole suite of technologies in security that rely on the idea that we can look at packets as they travel to figure out if anything malicious is going on - Intrusion Detection Systems, Data Loss/Leak Prevention, Deep Packet Inspection Firewalls, Web Content Filters, etc. Each of those systems relies on the ability to see the unencrypted traffic - to "spy" on users, as someone else so snarkily put it.

As SSL has become more prevalent, we have turned to (as someone else pointed out) terminating the encrypted traffic once it's on a "trusted" network so we can do that - but, if HTTP/2 is ONLY over SSL, there will be no "termination" - it will be encrypted from one end to the other.

That means that all of the traditional security technologies will be completely blind to anything that happens in that communication stream.

I wasn't bemoaning progress - I think this is a good step. But it's also a step toward a temporary lack of security while organizations catch up. That's because the security industry is a trailing industry (by definition) - you can't build a product that fixes security issues until you know: a) what the issues are, and b) how to fix them.

So, for a while, the early adopters of HTTP/2 are going to fly without a net somewhat.

(FYI - this same set of discussion points applies to IPv6 adoption)


And what about WebSockets? I know it's a different specification, but we should be using wss:// too.


Wow, are they going about this entirely wrong?

You don't screw around with the standard to try to drive adoption of encryption. You should solve a user interface problem by improving the user interface, right?

It's also not about getting an SSL cert. If you're doing anything interesting at all, you need an SSL cert, even if only for some percentage of your population. You also do need decent ways for people to distribute keys to their devices.

But at the end of the day, it's the green light/red light which is going to drive user adoption. The browser which capitalizes on privacy features and presents them best is a huge winner over the next 5 years.

Ultimately people should get the level of security they ask for. I don't think the spec should be catering to users who don't even know that http must die and https is the only way forward. Nothing could be more obviously true.

What's not obvious is the adoption rate once HTTP2 is baked. What the spec should be contemplating is how to get the best rollout. There are so many awesome features we want to start being able to rely on, but if the new stack isn't pervasive, some people will find it hard to justify coding for it.


It would be really sad if we ended up with a new standard gaining momentum while forcing people to keep supporting broken SSL/TLS.

I hope they mandate the newest TLS with the BEAST mitigations, etc., and also mandate perfect forward secrecy.

Generally, the whole CA approach also needs a rethink, but it's less straightforward to trot out solutions to that problem. Hopefully Moxie and other experts will weigh in.
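
In code, "newest TLS plus forward secrecy" boils down to a server-side configuration along these lines (a sketch using Python's ssl module; cert paths are placeholders):

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Refuse anything older than TLS 1.2, which rules out the BEAST-era protocols.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Only ephemeral (ECDHE) key exchange, so every session gets forward secrecy.
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
    ctx.load_cert_chain("server.crt", "server.key")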


Why shouldn't we be (optionally) cutting CAs out of this? They're already known to be a weak link in the chain.

How about they make it an option to put your own certificate chain in your DNS records, require DNSSEC and use pinning to cover the fact that the DNS server might get intermittently MITM'd?
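
The pinning half of that is cheap to bootstrap: publish a hash of the server's public key once (in DNS, signed with DNSSEC), and have clients compare it on later connections. Computing the pin looks roughly like this (the hostname is just an example; assumes the cryptography package):

    import base64
    import hashlib
    import ssl

    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    # Grab whatever certificate the server currently presents...
    pem = ssl.get_server_certificate(("example.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    # ...and hash its SubjectPublicKeyInfo. The pin survives cert renewals as
    # long as the key pair stays the same, which is what makes it pinnable.
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    print("pin-sha256:", base64.b64encode(hashlib.sha256(spki).digest()).decode())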


Now we are coupling data transfer with encryption, which we couple with validation. I thought people would have learned from SOAP that this is a horrible idea.

I understand messing around with SSL certificates is no issue for the likes of Google, but for the little guys, it's simply a lot of extra costs and work.

I don't trust CAs and I think we should just use an approach like DKIM for SSL.

I might want the performance benefits of HTTP 2.0 but might not care about security.


Wasn't this always part of the standard? What's new this time around?

<I'll save my rant about why I hate HTTP 2 for another time>


What changes does HTTP/2.0 bring? What would it change that would make me want to switch?

The one I would love to see solved is the state problem, cookies are not the best way to solve that problem. If there was a standard way to accomplish that within HTTP without all the mess that is Set-Cookie and the domain rules and all that fun stuff I would be very happy.


Many small sites, like personal sites, don't use certs. Why should I, as a blogger, have to make my site HTTPS? It's cruel.


Wasn't HTTP 2.0 a complete failure? I thought it was already obsolete.


By what?

HTTP 2.0 is the standardized version of SPDY.


From what I understood, HTTP 2.0 was designed by a committee that failed to come up with a usable standard. That is why everybody still uses HTTP 1. I thought 2.0 was skipped over in favor of another standard, but I could be wrong.


IETF is a rather chaotic process, probably the closest thing to "anti-committee" in the standards world. The working group chair, Mark Nottingham, is a pretty respected guy in the web world.

If you use Chrome or Firefox as a browser, you're probably already using SPDY as many larger servers (Google, Wordpress, Twitter, a bit of Facebook) support it.


The cost to have someone fix absolute/non-absolute href links will far exceed the cost of SSL or an extra IP. So expect to see a broken landscape of websites post implementation.


Why would links suddenly break? It's not like HTTP/1 will ever go away.


Non-SSL rendering of local hrefs is lenient; browsers don't complain. With forced SSL, however, badly constructed hrefs will make the browser complain that an SSL-enabled website is loading non-SSL content.

Most people don't understand that an href needs to be absolute, or how to easily fix it.


You're presuming that a web application would serve a page up over HTTP/2 and not test its embedded hrefs for the user experience?

More likely, legacy pages will remain on HTTP/1.


Fun fact: cert-free SSL used to work in some browsers, at least Mozilla, but then they disabled the anonymous SSL modes in the browser. I was using it and was saddened...


Very good idea - will make snooping and tracking impossible. So end of targeted ads. If we can now snooze the surveillance regime as well - all the better.


So will this prevent NSA et al. spying? Or have we reached the stage where they have all of the relevant certs already embedded into their ISP black boxes?


Prevent? Not really. Make it much harder and more expensive for them? Yes. But after we do this, the next target is definitely fixing the CA system so it's trustless, because we can be sure the CAs will be the NSA's next targets, legally or illegally.


I'm curious on how you envision a trustless security system to work. Any links?


Why is HTTP enveloping SSL? Shouldn't the transport layer be separate from the Application layer?


PFS = Perfect Forward Security... wait, what? It's not perfect, nor is it security. Great post.


Sounds like just unneeded overhead for some situations


Secure is good, but SSL is not the right protocol.


What would you recommend instead?


NSA


I guess you haven't used IPsec; SSL is a much better option.


With Chrome rolling out AES-GCM (and other browsers likely to follow suit), this seems like a really plausible future state of the world.


this + IX would be perfect



