I wish the article had spent more time talking about these things rather than rambling about "politics".
"HTTP/2.0 could have done away with cookies, replacing them instead with a client controlled session identifier."
That would have destroyed any hope of adoption by content providers and probably browsers.
"HTTP/2.0 will require a lot more computing power than HTTP/1.1 and thus cause increased CO2 pollution adding to climate change."
Citation? That said, I'm not particularly shocked that web standards aren't judged on how the devices using them will affect power grids and those grids' energy sources.
"The proponents of HTTP/2.0 are also trying to use it as a lever for the "SSL anywhere" agenda, despite the fact that many HTTP applications have no need for, no desire for, or may even be legally banned from using encryption."
In the same paragraph, the author complains that HTTP/2.0 has no concern for privacy and then that they attempted to force encryption on everybody.
"There are even people who are legally barred from having privacy of communication: children, prisoners, financial traders, CIA analysts and so on."
This is so close to "think of the children" that I don't even know how to respond. The listed groups may have restrictions placed on them in certain settings that ensure their communications are monitored. But this doesn't prevent HTTP/2.0 with TLS from existing: there are a variety of other avenues by which their respective higher-ups can monitor the connections of those under their control.
In fact there's no need for this to be tied in with HTTP/2.0 at all. Alternate systems could be designed without regard to HTTP/1.x or HTTP/2.y; they just have to agree on some headers to use and when to set them.
Making these kinds of changes as part of a new version of HTTP would just be bloat on an already bloated spec; it is actually a good thing that the spec writers did not touch this!
Not only that, tying cookies to HTTP/2.0 would be a layering violation! The cookie spec is a separate spec that uses HTTP headers, and it also explicitly says that cookies can be entirely ignored by the user-agent.
Cookies as they are now already allow you to control persistence. You can edit or delete them however you wish.
Instead of all the servers dumping cookies on you, you send a session-id to them, for instance 127 random bits.
In front of those you send a zero bit if you are fine with the server tracking you, and you save the random number so you send the same one every time you talk to that server. This works just like a cookie.
If you feel like you want a new session, you can pick a new number and send that instead, and the server will treat that as a new (or just different!) session.
If instead you send a one bit in front of the 127 random bits, you tell the server that you will consider this "session" over once you are done, and that you do not want them to track you.
Of course this can be abused, but not nearly as much as cookies are abused today.
But it has the very important property that all requests get a single fixed-size field to replace all the cookies we drag across the net these days.
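The scheme described above is easy to make concrete. A minimal sketch (function names are mine, not from any spec): one leading consent bit plus 127 random bits, packed into a fixed 16-byte field.

```python
import secrets

SESSION_ID_BITS = 127

def new_session_id(allow_tracking: bool) -> bytes:
    # 127 random bits with one consent bit in front: a leading zero bit
    # means "tracking me across visits is fine", a leading one bit means
    # "treat this session as over once I stop using it".
    value = secrets.randbits(SESSION_ID_BITS)
    if not allow_tracking:
        value |= 1 << SESSION_ID_BITS  # set the leading "do not track" bit
    return value.to_bytes(16, "big")   # single fixed-size 16-byte field

def allows_tracking(session_id: bytes) -> bool:
    # The consent bit lives in the top bit of the first byte.
    return not (session_id[0] & 0x80)
```

Picking a fresh number is all it takes to start a new session; resending the same number is the cookie-like case.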
More critique of cookies: https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies
If anything, it possibly reduces my options as a client. In situations where a site previously dropped lots of cookies on me, my only options now are to completely close my session or persist all data, whereas before there were situations where I could maintain, say, my logged-in user session while removing the "is_a_jerk=true" cookie.
That could also just be achieved by disallowing third-party cookies though - it feels on the cusp of being a browser implementation problem (just stop allowing third-party cookies).
(sorry for the repost, this wasn't posted when I posted the other one)
I still have not seen a single proposal from him that is not "forget about crypto", or "let everything be tracked".
He doesn't seem to even understand how basic key exchange works, and all his arguments boil down to "I think they can break it anyway, so we should stop using it".
IMHO, forget about him. He has no proposal, and his understanding of security and crypto is downright dangerous.
You could look at section 15 here for instance: http://phk.freebsd.dk/words/httpbis.html
With respect to key-exchange, maybe the problem is that I do understand, and therefore know that there is a difference between privacy and secrecy?
Some places we want privacy, some places we want secrecy.
Mandating privacy everywhere makes it almost mandatory for police-states to trojan the privacy infrastructure (i.e. CAs -- yes, they already did), and therefore we have neither privacy nor secrecy anymore.
PS: Dangerous for whom?
PPS: Google "Operation Orchestra" if you don't understand the previous question.
So the question isn't whether children have privacy, but against whom are they able to be private, in what regard? While I'm generally ok with parents knowing where their kids are, I'm not as ok with a random creep in the neighborhood knowing.
As far as criminals and prisoners, it's far more granular than that. If I get caught shoplifting, does that suddenly mean all my medical records are up for grabs? If I am a prisoner, am I allowed to refuse to see someone who comes to visit me?
Inspecting proxies work well enough with HTTP/1.1; there's no real need to improve them, and even if we do, it should be done via a proxy mechanism rather than by sending more things in plaintext than we have to.
Anyway, HTTPS routers can (and should) terminate SSL, and in that case they can even read the Host from SNI.
But given that large datacenter operators are considering how to build greener/smarter operations to reduce their impact (and costs), it's clearly something to be mindful of.
I also wish there were a layer between full encryption and plaintext for content that is intended to be available over both http and https: signed content, without the extra overhead.
Why adoption by browsers?
I suspect it would end up looking similar to IPv6: Unless ISPs are providing the majority of users with IPv6, software developers aren't well incentivized to support IPv6 in their software. Similarly to browsers, they'd lose very little for supporting the new standard now, but the gains are low while adoption by the other group is low.
I'm frustrated to read this myth being propagated. We should know better.
In the presence of only passive network attackers, sure, self-signed certs buy you something. But we know that the Internet is chock-full of powerful active attackers. It's not just NSA/GCHQ, but any ISP, including Comcast, Gogo, Starbucks, and a random network set up by a wardriver that your phone happened to auto-connect to. A self-signed cert buys you nothing unless you trust every party in the middle not to alter your traffic.
If you can't know whom you're talking to, the fact that your communications are private to you and that other party is useless.
I totally agree that the CA system has its flaws -- maybe you'll say that it's no better in practice than using self-signed certs, and you might be right -- but my point is that unauthenticated encryption is not useful as a widespread practice on the web.
Browser vendors got this one right.
Unless you pin the cert, I suppose, and then the only opportunity to MITM you is your first connection to the server. But then either you can never change the cert, which is a non-option, or users will occasionally have to click through a scary warning like the one ssh gives. Users will just click yes, and indeed that's the right thing to do in 99% of cases, but now your encryption scheme is worthless. Also, securing first connections is useful.
Rejecting self-signed certs and only allowing users to use the broken CA PKI model is the wrong choice. Browsers didn't get it right. The CA model is broken, is actually being used to decrypt people's traffic, and though your browser might pin a couple of big sites, it won't protect the rest very well by default. It's a bad hack and we should fix the underlying issue with the PKI. I believe Moxie was right: a combination of perspectives + TOFU is the way to do this.
Things that also work like this that we all rely on and generally seems more secure than most other things we use: SSH.
The scenarios where this works are pretty limited: pretty much only a server you set up yourself. Jane User has no idea if the first use of ecommercesite.com is actually safe. You generally do, because you have out-of-band access to that server to see the key or key fingerprint. Even that can be thwarted by a clever MITM attack.
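For what it's worth, the SSH-style trust-on-first-use model being discussed is simple to sketch. A toy known_hosts-style store (class and method names are illustrative, not any real API):

```python
import hashlib

class TOFUStore:
    """Toy trust-on-first-use store in the style of SSH's known_hosts:
    pin the first certificate seen for each host and flag any change."""

    def __init__(self):
        self.known = {}  # host -> pinned certificate fingerprint

    @staticmethod
    def fingerprint(cert_der: bytes) -> str:
        return hashlib.sha256(cert_der).hexdigest()

    def check(self, host: str, cert_der: bytes) -> str:
        fp = self.fingerprint(cert_der)
        pinned = self.known.get(host)
        if pinned is None:
            self.known[host] = fp  # first use: trust and pin
            return "first-use"
        if pinned == fp:
            return "match"
        # Could be a MITM, or a legitimate key rotation -- exactly the
        # ambiguity the thread is arguing about.
        return "mismatch"
```

The whole debate boils down to what the UI does in the "first-use" and "mismatch" cases: ssh warns and lets you decide; browsers refuse unless a CA vouches for the cert.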
>Things that also work like this that we all rely on and generally seems more secure than most other things we use: SSH.
Yeah, that's generally for system administrative access, not general public access for the web. For that kind of access you need higher safeguards, thus the CA system we have today.
> Things that also work like this that we all rely on and generally seems more secure than most other things we use: SSH.
When was the last time you verified a certificate's fingerprint out of band when connecting to a server for the first time? Maybe you're the kind of person who scrupulously does this, but in my experience even paranoid computer types don't, to say nothing of regular people.
> I believe moxie was right, a combination of perspectives + TOFU is the way to do this.
+1, it's hard to go wrong listening to Moxie.
Today they can grep plaintext as they want. With SSC's they would have to pinpoint what communication they really need to see.
That would be a major and totally free improvement in privacy for everybody.
As for why the browsers so consistently treat SSC's as ebola: I'm pretty sure NSA made that happen -- they would be stupid not to do so.
But the important point is what we'd lose: If a UA accepts a SSC for an arbitrary website, then the NSA can actively MITM a website that uses a CA cert -- the browser will never see that CA cert, so it doesn't know better.
The only way around this would be to accept SSCs but treat them as no different from plain HTTP in the UI. But now websites have no incentive to use the certs in the first place, further limiting the benefits to be had.
Really, just buy a certificate. It's not that hard.
> That would be a major and totally free improvement in privacy for everybody.
Tangentially, I feel like you're trying to have it both ways in arguing for more encryption on the web and also against mandatory encryption on the grounds of energy efficiency. SSCs wouldn't be "totally free" under your model -- we'd be spending carbon on it. I might argue that if one is going to pollute to encrypt their data, it would be unethically wasteful to use a SSC, which comes at exactly the same environmental cost as a CA cert but offers much weaker guarantees.
That, or make HTTP/2's non-secure mode always be encrypted. Which is what people like myself would like. Opportunistic encryption at zero hassle and with zero security problems.
Maybe you're arguing that it would be better to have an insecure but encrypted mode in all browsers? Could be, but I don't think so. As it is, if a site wants the benefits of HTTP/2, they have to establish actually secure communication (inasmuch as CAs provide that). It seems good to me to use the performance benefits of HTTP/2 as a carrot to accomplish real encryption everywhere. If the costs of getting a cert were too high, then maybe this would leave many websites stuck on HTTP/1, which would be worse than having those sites use an encrypted-but-insecure mode in HTTP/2, but I don't think that will be the case.
Pretty sure this is what we're doing today with SMTPS: we just wrap SMTP in TLS and call it a day. It's dangerous and allows for MITM attacks. I think there was a paper recently about how this is already being abused.
I just don't see where people who believe in self-signed certs as a solution to all our encryption woes are coming from. Historically and technically, the approach has shown itself to be a security nightmare for most use cases. I think people like this are more political than practical, and think they can do non-trivial things without regulation, authorities, etc. Sorry, but that's just not how this world works.
Sure, we could start with cookies. However, that would break a lot of the web with no immediate benefit.
On SSL everywhere (not "anywhere"), how much resources does it cost to negotiate SSL/TLS with every single smartphone in their area? Supposedly, not much. I run https websites on an Atom server.
Frankly, that was rather unconvincing. Although it does seem likely that the entire process was driven by the IETF trying to stay politically relevant in the face of SPDY.
If a webserver were to set some secure session identifier, the same laws still apply -- just as installing software without explicit consent is covered by the same law.
Until there is actual legal precedent from people suing businesses that abuse these abilities, I have no idea how to interpret these laws other than as "very broad and vague".
Also, SSL gets way more complicated when you are using a CDN.
In such a way, your application channel can maintain all the state it needs in that websocket... no cookies needed. The downside of the web today is the lack of a standard display interface/size... you have to work from a phone all the way to a 1080p or larger desktop or big screen display.
These abilities to use the web and enhance things are exactly why the web is as pervasive as it is... if it weren't for such open, widespread capability we'd all be stuck with a natural monopoly (Windows) everywhere.
It's hack upon hack, kluge upon kluge.
Example: Do you think HTML/CSS/JS is the right way to develop a user interface for an application? I don't.
Most of the interesting stuff in HTTP/2.0 comes from the better multiplexing of requests over a single TCP connection. It feels like we would have been better off removing multiplexing from HTTP altogether and adopting SCTP instead of TCP as the lower transport. Or maybe he had other things in mind.
> There are even people who are legally barred from having privacy of communication: children, prisoners, financial traders, CIA analysts and so on.
This argument is quite weak; SSL can easily be MITMed if you control the host, generate custom certs, and make all the traffic go through your regulated proxy.
1) Unless tunneled over UDP (which has its own problems), SCTP fails to work with NATs and stateful firewalls.
2) HTTP(s) only environments (e.g. some big corporations) would not work with it; SPDY will look enough like HTTPS to fool most of these.
3) Lack of Windows and OS X support for SCTP (without installing a 3rd party driver) means tunneling over UDP.
Unfortunate, but true.
How is it MITM if you control the host? If you don't trust the host then you are hosed, period; there is no protocol that will save you.
If HTTP/2.0 had done this right, there wouldn't be a need for your employer to trojan your CA list so they can check for "inappropriate content" going either way through their firewall. (Under various laws they may be legally mandated to do so, flight controllers, financial traders etc.)
But because HTTP/2.0 was more about $BIGSITE's unfettered access to their users, no mechanism was provided for such legally mandated M-I-T-M, and therefore the CA-system will be trojaned by even more people, resulting in even less security for these users.
Likewise, pushing a lot of traffic which doesn't really need it onto SSL/TLS will only force NSA and others to trojan the CA-system even harder, otherwise they cannot do the job the law says they should do.
As I've said earlier: Just mindlessly slapping encryption on traffic will not solve political problems but is likely to make them much worse.
See for instance various "Law Enforcement" types calling for laws to ban encryption, or England's existing law that basically allows them to jail you until you decrypt whatever they want you to decrypt (never mind if you actually can or not...)
Edit to add:
The point about $BIGSITE is that everybody hates it when hotels, ISPs and phone companies modify the content to insert ads etc. Rightfully so.
Any proxy which does not faithfully pass content through should require the client's consent for its actions.
But since such proxies are legal, and in some places legally mandated (smut filters in libraries & schools, filters in jails, parental controls at home, "compliance gateways" at companies), trying to make them impossible with the protocol just means that the protocol will be broken.
In fact, the uses of proxies that you seem to be pointing out as examples of why ubiquitous encryption is a "bad thing" - such as those in schools, homes, workplaces, etc. to block "objectionable content" - would probably be better handled by blocking IP addresses or domain names, rather than trying to break into encrypted HTTP sessions, would it not? Last I checked, TLS does not prevent the ability to detect when a user agent attempts to access a particular host (whether by IP address or domain name), thus allowing $BIGSITE to close off access to blacklisted or non-whitelisted hosts without needing to know the exact data being exchanged.
Honestly, and with all due respect, the idea that ubiquitous use of TLS would in any way, shape, or form stifle $BIGSITE's ability to monitor and block attempts to access "objectionable" sites seems absurd when there are plenty of more effective ways to do such things that don't involve a total compromise of privacy or secrecy.
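To make the point concrete: the hostname a client asks for travels in the clear in the TLS handshake, so a filter can block by name without decrypting anything. A rough, best-effort sketch of pulling the server_name extension out of a raw ClientHello (a real middlebox would need to handle fragmented records and unusual clients):

```python
import ssl

def extract_sni(client_hello: bytes):
    """Best-effort parse of the SNI hostname from a single raw TLS
    handshake record containing a ClientHello. Returns None if absent."""
    if len(client_hello) < 44 or client_hello[0] != 0x16:
        return None                   # not a TLS handshake record
    pos = 43                          # skip record header (5), handshake
                                      # header (4), version (2), random (32)
    pos += 1 + client_hello[pos]      # session_id
    pos += 2 + int.from_bytes(client_hello[pos:pos + 2], "big")  # cipher suites
    pos += 1 + client_hello[pos]      # compression methods
    end = pos + 2 + int.from_bytes(client_hello[pos:pos + 2], "big")
    pos += 2
    while pos + 4 <= end:             # walk the extensions
        ext_type = int.from_bytes(client_hello[pos:pos + 2], "big")
        ext_len = int.from_bytes(client_hello[pos + 2:pos + 4], "big")
        pos += 4
        if ext_type == 0:             # server_name extension
            # list length (2), name type (1), name length (2), then name
            name_len = int.from_bytes(client_hello[pos + 3:pos + 5], "big")
            return client_hello[pos + 5:pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None
```

A filtering gateway can match that hostname against a blacklist and drop the connection, all without ever holding a key.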
Just don't think you will make such proxies disappear with technical means -- in particular not where they are mandated by law.
I fully agree with you when we're talking about people trying to make money by modifying 3rd party traffic.
But I leave it to the relevant legislatures (and their electorates!) to decide with respect to libraries, schools, prisons, financial traders, spies, police offices and so on.
With respect to HTTP/2 there were two choices:
1) Try to use the protocol to force a particular political agenda through.
2) Make the protocol such that people behind manipulating proxies have notice that this is so, and leave the question of which proxies should be there to the political systems.
Implementing policy with protocols or standardisation has never worked and it won't work this time either.
At the end of the day: IETF has no army, NSA is part of one.
To that, I disagree. On a practical level, many sites are already encrypted with TLS, especially the $BIGSITEs, so intermediaries that want to MITM their subordinates already must compromise their hosts. No browser is likely to ship an update that would regress on the privacy guarantees of traffic on the wire in this way.
On an ideological level I believe that it is better to err on the side of making information available to those who want it rather than empowering those who wish to censor and monitor access of information. I would also add that the RFCs issued by the IETF have historically been ideologically aligned with free and unimpeded access to information, and have more frequently treated censorship and monitoring as attacks to defend against rather than use cases to fulfill.
Is "right" in your opinion keeping SSL/TLS optional as it is today? Don't we already have the same concerns (legal obligation to MITM HTTPS connections)?
> no mechanism was provided for such legally mandated M-I-T-M
Maybe this is what you meant by doing it "right". How would this look? I'm having trouble imagining how such a mechanism could be securely built into HTTP.
My point is that HTTP/2 didn't even try, because the political agenda for a lot of people was more or less "death to all client side proxies".
They're entitled to that opinion of course, but given that laws in various countries say the exact opposite, the only thing they achieve by not making space for it in the security model is that the security model will be broken.
Can you explain the link from required SSL to this point? I'm not seeing it...don't they already have unfettered access?
In 1995, the process began for HTTP to be replaced by a ground up redesign. The HTTP-NG project went on for several years and failed. I have zero confidence that a ground up protocol that completely replaces major features of the existing protocol used by millions of sites and would require substantial application level changes (e.g. switching from cookies to some other mechanism) would a) get through a standards committee in 10 years and b) get implemented and deployed in a reasonable fashion.
We're far into 'worse is better' territory now. Technical masterpieces are the enemy of the good. It's unlikely HTTP is going to be replaced with a radical redesign anymore than TCP/IP is going to be replaced.
Reading PHK's writings, his big problem with HTTP/2 seems to be that it is not friendly to HTTP routers. So a consortium of people just approved a protocol that does not address the needs of the author's major passion, HTTP routers, and a major design change is desired to support that use case.
I think the only way HTTP is going to be changed in that way is if it is disrupted by some totally new paradigm, that comes from a new application platform/ecosystem, and not as an evolution of the Web. For example, perhaps some kind of Tor/FreeNet style system.
My big problem with HTTP/2 is that it's crap that doesn't solve any of the big problems.
Why did nothing happen since the HTTP/1.1 spec? Everyone sat around until Google decided to move stuff forward.
> Local governments have no desire to spend resources negotiating SSL/TLS with every single smartphone in their area when things explode, rivers flood, or people are poisoned.
I remember some concerns about performance of TLS five to ten years ago, but these days is anybody really worried about that? I remember seeing some benchmarks (some from Google when they were making HTTPS default, as well as other people) that it hardly adds a percent of extra CPU or memory usage or something like that.
Also, these days HTTPS certificates can be had for similar prices to domains, and hopefully later this year the Let's Encrypt project should mean free high quality certificates are easily available.
With that in mind, forcing HTTPS is pretty much going to be only a good thing.
Load balancer decrypts, looks at headers, decides what to do, re-encrypts, down to the app tier, decrypt, respond encrypted, etc. I'm not saying that is a bad thing, but that's why some people get cranky.
Somewhat unrelated: compressed headers in HTTP/2.0 make sense if you only think about the browser, since they save 'repeated' information. The problem is that the LB has to decompress them every time anyway, so someone still has to do the work; it just isn't on the wire. Server push on the other hand could be awesome for performance (pre-cache the resources for the next page in a flow) but also has the potential for abuse.
Well I hope you'd be doing that even without HTTP/2...
To address what you've said, increasing the energy consumption of every internet connected device in the world will probably have noticeable effects on the aggregate (more power for the server, more power to cool the server, etc.).
> increasing the energy consumption of every internet connected device in the world will probably have noticeable effects on the aggregate
How would this compare to the energy used by e.g. a single wasteful banner ad campaign? Do you really think that it makes sense to be concerned with the part which is increasingly executed by optimized hardware?
Not true at all. Early HTTP (which became known as HTTP/0.9) was very primitive and very different from what is used today. It was five or six years until HTTP/1.0 emerged, with a format similar to what we have today.
I actually like HTTP/0.9. If you're stuck in some weird programming language without an HTTP/1.1 client (HTTP/1.0 is useless because it lacks Host:, while HTTP/0.9 actually does support shared hosts, just use a fully-qualified URI) you can just open a TCP connection to a web server and send a GET request the old-fashioned way.
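"The old-fashioned way" really is this small. A sketch of an HTTP/0.9-style request over a bare socket (function name is mine; whether a given modern server still answers it is another matter):

```python
import socket

def http09_get(host: str, port: int = 80, uri: str = "/") -> bytes:
    """Issue an HTTP/0.9-style request: just 'GET <uri>' on a raw TCP
    connection, then read until the server closes. No headers, no
    status line -- the response is the bare body."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b"GET " + uri.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)
```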
Who says what an authoritarian state is? The reason there are a lot of CAs and some of them are governments is that having rules and policies for what it takes to be a CA is way better than having some random neckbeard at a browser maker deciding he read something in the newspaper yesterday about your country he didn't like, so you can't be a CA.
If you wanted to, you could build a browser that's a fork of Firefox/Chrome, and just doesn't show the padlock when a CA that you believe is under the thumb of an authoritarian state is the signer. However you would then have to exclude all American and British CA's, which would then exclude most SSLd sites, thus making your fork not much different to just deciding to never show any padlock at all and assert that everything is insecure so fuck it, let's (not) go shopping.
OK ..... back here in reality, real browser makers understand that there are more adversaries than governments, and actually SSL was designed to make online shopping safer, not be a tool of revolution. Judged by the "make shopping safer" standard it does a pretty great job. Judged by the "fight repressive regime and save the world" standard, it still does a surprisingly good job - the NSA doesn't seem to like it much at all - but it's unrealistic to expect an internet protocol designed in the mid 90s to do that.
Even if Jesus Christ managed a certificate authority, someone would complain. Everyone - even G-d - has a conflict of interest.
There are clearly some good people working on worthwhile things at Google. My concern is that a lot of those things don't end up being pushed by Google, not only because they might hurt Google itself, but because of non-obvious outside influence.
We shouldn't forget that many things we accuse the NSA of, like lack of accountability, overzealous collection of data, and the undermining of privacy, are all things we can also expect from a corporation.
My point was that everyone has self interest, if one that's influential and has resources comes up with a proposal that is reasonably transparent and beneficial, it seems self-destructive to reject it out of distrust.
I just think the US government is the best organization in the world at exerting pressure, and that Google, even if they really wanted to (which isn't clear), isn't going to end up with an agenda hugely contradictory to the US government's wishes. The US has a long history of using industry for geopolitical goals, and the tech industry isn't any different.
If we do end up with a system that is in line with people's fundamental rights, I'll be the first one to commend them for it though.
You're right to say that the PKI doesn't work if you just want to trust any site that shows a padlock in the address bar, but it's useful if you do a little work.
No, it isn't. It still suffers from not respecting name constraints. You can't set up trust for only a list of domains. If I run my own CA for some list of domains, there is no way I can prevent my CA from being able to sign for google.com. Instead, people use wildcard certs so they can be delegated responsibility for a subdomain.
If name constraints were implemented more widely, that'd be great. But someone has to write the code, debug it, ship it, etc, and then you have to wait until lots of people have upgraded, etc, and ultimately wildcard certs work well enough.
Without name constraints I assert the system is inherently broken. You cannot limit trust other than yes/no.
> ultimately wildcard certs work well enough.
Well enough is arguable. The problem is that with a wildcard cert the same private key sits on every machine, so your attack surface grows with each machine, rather than each machine having its own key.
I'm no fan of HTTP/2, but this article does not effectively argue against it. Too many bare assertions without any meat to them. And when you fail to mention a major purpose of a protocol (SSL) you dismiss as useless, you lose a lot of credibility.
CA's are trojaned, that's documented over and over by bogus certs in the wild, so in practice you have no authentication when it comes down to it.
Authentication is probably the hardest thing for us, as citizens, to get, because all the intelligence agencies of the world will attempt to trojan it.
Secrecy on the other hand, we can have that trivially with self-signed certs, but for some reason browsers treat those as if they were carriers of Ebola.
I'm arguing against making SSL mandatory, because that will force NSA to break it so they can do their work, and then we will have nothing to protect our privacy.
More encryption is not a solution to a political problem: http://queue.acm.org/detail.cfm?id=2508864
It's bizarre to think that, if the NSA could break TLS, they're holding back.
That line of reasoning sounds bizarre to me. It sounds like "don't add a lock to your door, because that will force the criminals to break the lock, and then your door will be unlocked".
Try this one, it's better, but not perfect:
Imagine what would happen if some cheap invention turned all buildings into impenetrable fortresses unless you had a key for the lock.
Now the police cannot execute a valid judge-sanctioned search warrant.
How long do you think lawmakers would take to react?
If the problem is with the analogy, without analogies this time:
> I'm arguing against making SSL mandatory, because that will force NSA to break it so they can do their work, and then we will have nothing to protect our privacy.
Without SSL, our privacy is unprotected, since eavesdroppers can read our traffic. Now add SSL, and eavesdroppers cannot read the traffic. Then NSA breaks it, and eavesdroppers can read our traffic again - we've just circled back to the beginning. We will have nothing to protect our privacy, but we already had nothing to protect our privacy before we added SSL; and in the meantime before the NSA breaks it, we had privacy.
And it assumes that the NSA will be able to break it, and that the NSA is the only attacker which matters.
There are many ways to break SSL, the easiest, cheapest and most in tune with the present progression towards police-states is to legislate key-escrow.
Google "al gore clipper chip" if you don't think that is a real risk.
We've all learned from the failure of SNI and IPv6 to gain widespread adoption (thank you, Windows XP and Android 2.2). HTTP/2 has been designed with the absolute priority of graceful backward compatibility. This creates limits and barriers on what you can do. Transparent and graceful backward compatibility will be essential for adoption.
I agree, HTTP/2 is Better - not perfect. But better is still better.
HTTP/2 isn't really like IPv6 in that fewer people need to act to adopt it -- if the browser vendors do (which they are already) and the content providers do (which some of the biggest are already), then it's used. It's specifically designed to be compatible with existing intermediate layers (particularly when used with TLS on https connections) so that as long as the endpoints opt in, no one else needs to get involved -- and with one of the biggest content providers also being a browser vendor who is also one of the biggest HTTP/2 proponents...
IPv6 requires support at more different levels (client/server/ISP infrastructure software & routers, ISPs actually deciding to use it when their hardware/software supports it, application software and both client and server ends, etc.) which makes adoption more complex.
> HTTP/2 isn't really like IPv6 in that fewer people need to act to adopt it -- if the browser vendors do (which they are already) and the content providers do (which some of the biggest are already), then its used.
I, for one, welcome the HTTP/1.x+2 future of 5-10 years from now. (Obligatory http://xkcd.com/927/ )
What is the basis of this claim? ISTR that SPDY and the first drafts of HTTP/2 were TLS-only, and that some later drafts had provisions which either required or recommended TLS on public connections but supported unencrypted TCP for internal networks, but the current version seems to support TLS and unencrypted TCP equally.
But given that both some HTTP/2-supporting browsers and much of the server-side software supporting HTTP/2 are open source, and given that all the logic will be implemented and the only change will be allowing it on unencrypted TCP connections, it'll probably be fairly straightforward for anyone who cares to put together a proof of concept of the value of unencrypted HTTP/2.
OTOH, the main gain of HTTP/2 seems to be on secure connections, so I'm not sure why one would want unencrypted HTTP/2 over unencrypted HTTP/1.1, and given that no browser seems to have short-term plans to stop supporting HTTP/1.1, there's probably no real use case.
But the protocol supports unencrypted use just fine.
Google didn't want something that would break even a tiny percentage of existing installs.
Honestly, SRV records cover about 90% of the usage I've seen people deploy ZooKeeper or etcd for. I'd love to see them become the standard way of doing such things.
It sounds like you're arguing that SRV records are no slower than A records, which, on its face, seems reasonable. A DNS request is a DNS request, and aside from a response being too big for UDP and having to switch to TCP, you should get nearly identical performance.
The part, to me, that looks like a real performance issue is potentially having to double the minimum number of queries needed to serve a website. We couldn't possibly switch directly to SRV records; there would have to be an overlap of browsers using both SRV and A records for backwards compatibility.
If we stick with that invariant, then the first-load cost of a page not using SRV records doubles in the worst case: websites that only have an A record. Now we're looking for an SRV record, not getting it, and falling back to an A record. So all of the normal websites that don't give a crap about SRV records, and won't ever use them, pay a performance penalty. A marginal one, sure, but it's there.
So, overall, their claim seems valid, even if low in severity.
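The lookup order argued about above can be sketched in a few lines. This is a hypothetical browser-side resolver, not anything any browser actually implements: the `_http._tcp` name, the record shapes, and the injected `query` function are all illustrative assumptions, used so the query-count argument can be shown without real DNS traffic.

```python
def resolve_endpoint(domain, query):
    """Return (address, port, queries_made) for an HTTP connection.

    `query(name, rtype)` is an injected resolver standing in for DNS.
    """
    queries = 1
    srv = query("_http._tcp." + domain, "SRV")  # first query: the SRV record
    if srv:
        # Records are (priority, weight, port, target); take the lowest
        # priority and ignore weight-based load spreading for brevity.
        _prio, _weight, port, target = min(srv)
        queries += 1
        return query(target, "A")[0], port, queries
    # SRV miss: fall back to a plain A lookup on the default port. For
    # A-only sites, this second query is the marginal penalty discussed above.
    queries += 1
    return query(domain, "A")[0], 80, queries

# A toy zone: one SRV-aware site and one plain A-only site.
ZONE = {
    ("example.net", "A"): ["192.0.2.10"],
    ("_http._tcp.example.org", "SRV"): [(10, 5, 8080, "web1.example.org")],
    ("web1.example.org", "A"): ["192.0.2.20"],
}

def fake_query(name, rtype):
    return ZONE.get((name, rtype), [])

print(resolve_endpoint("example.org", fake_query))  # ('192.0.2.20', 8080, 2)
print(resolve_endpoint("example.net", fake_query))  # ('192.0.2.10', 80, 2)
```

The A-only site ends up at two queries instead of one, which is exactly the worst-case doubling described above.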
I'd love to hear from someone that has the data, but I can count on one hand the number of times where a loss of IP connectivity has happened where I wish I had SRV records for load balancing. It's usually bad DNS records, or slow/bad DNS propagation, or web servers behind my load balancers went down, or a ton of other things. Your point is still totally valid about being able to more easily load balance across multiple providers, datacenters, what have you... but I'm not convinced it's as much of a problem as you make it out to be.
> […] or web servers behind my load balancers went down […]
I’m getting the impression that you think that even having a load balancer is a natural state of affairs, but it should not be. Getting more performance should be as easy as spinning up an additional server and editing the DNS data; done. Your attitude reminds me of the Unix-haters handbook, describing people who have grown up with Unix and are irreversibly damaged by it: “They regard the writing of shell scripts as a natural act.” (quoted from memory).
As far as load balancers, sure, it'd be great if nobody needed to do anything other than spread requests across a pool of heterogeneous machines... but there's plenty more to be had by using a load balancer in front of web servers, namely intelligent routing. Things you just couldn't possibly figure out, as the browser, from looking at SRV records.
Besides that, I appreciate your thorough and clearly-informed thoughts on my attitude and state of mind when it comes to engineering systems. It definitely elevated this discussion to new heights, to be sure.
Um, no? If a DNS client asks a DNS server for an SRV record, and the DNS server has the A (and AAAA) records for the domain names contained within that SRV record, the DNS server will send those A (and AAAA) records along in the reply in the “ADDITIONAL” section; i.e. not in the “ANSWER” section as a reply to the actual SRV query, but still contained within the same DNS response. So the tiny performance issue for this minor case can be solved for those who need to solve it.
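The client side of that optimization can be sketched as follows. The `response` shape here is a simplified stand-in for a parsed DNS message, not a wire format; the function name and record layouts are illustrative assumptions.

```python
def resolve_srv_one_shot(response):
    """Extract (target, port, address) from one SRV response, using any
    glue A records the server placed in the ADDITIONAL section."""
    # "answer": (priority, weight, port, target) SRV tuples.
    # "additional": {target_name: [addresses]} glue records.
    results = []
    for _prio, _weight, port, target in sorted(response["answer"]):
        addrs = response["additional"].get(target)
        if addrs:
            # Glue present: address obtained in the same round trip.
            results.append((target, port, addrs[0]))
        else:
            # Glue missing: a follow-up A/AAAA query would be needed.
            results.append((target, port, None))
    return results

reply = {
    "answer": [(10, 5, 8080, "web1.example.org"),
               (20, 5, 8080, "web2.example.org")],
    "additional": {"web1.example.org": ["192.0.2.20"]},
}
print(resolve_srv_one_shot(reply))
# [('web1.example.org', 8080, '192.0.2.20'), ('web2.example.org', 8080, None)]
```

The `None` case is where the follow-on question below about CNAME targets and servers that don't populate ADDITIONAL starts to bite.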
> […] a load balancer [can also be used for] intelligent routing.
Well, yes, SRV records can’t be all things to all people. This is, however, nothing which will affect, I guess, at least 90% of those even today using load balancers. Those needing this extra functionality can perfectly well keep their load balancers or (to call them for what they actually would be) HTTP routers.
These are, however, both minor quibbles (the first of them even has a solution) and should not affect the decision to specify SRV usage in HTTP/2.
(Also, being overly ironic does not help discourse, either.)
So, either you hope the server responds with the A/AAAA records in the additional section, or you have to query for all records on an RR, or further still, do multiple queries. What happens when your SRV records point to CNAMEs? Do most DNS servers that support sending back the A/AAAA records in the ADDITIONAL section also support resolving the CNAMEs before populating the additional section?
There are a few other things, too, like having to make interesting tradeoffs on TTLs: if you have TTLs low enough to support using DNS as a near-real-time configuration of which web servers to use, what happens when DNS itself breaks? There's some operational pain there, to be sure.
This is all to say: there's clearly a lot of angles to something as simple as using SRV records in lieu of A/AAAA/CNAME records, and we're here, right now, talking about this, all because of the rushed design of HTTP/2.0, which is a protocol unto itself. It's not surprising that a standard that went through so quickly managed to not include something, like SRV records, which have been in a weird state of existence since their inception. To think it would be so simple, so easy, seems incredibly overoptimistic.
Also, there is, by now, a lot of operational experience with both MX records and SRV records, and they are well understood. They are not the wild unknown you make them out to be.
Taking server CPU utilization numbers as an indicator of total power consumption is pretty misguided in this context, and my understanding is that even those are optimized (and will continue to be optimized) to the point where TLS and SPDY have negligible overhead (or, in the case of SPDY, may even result in lower CPU usage).
The difference between you and me may be that I have spent a lot of time measuring computers' power usage doing all sorts of things. You seem to be mostly guessing?
That's one horrible argument, though. The cost of a text-based protocol over TCP on Ethernet greatly outweighs the cost of the encryption process.
Yes, of course encrypting things will increase computational requirements yet the cost is negligible in comparison to the problem being solved (stopping the trade of personal data).
It's hard for me to associate a privacy champion with these statements.
Having to break stuff into protocol frames or having every byte have compute done on it for encryption foils his optimization. It doesn't mean that his optimization is more important than TCP connection sharing and ubiquitous confidentiality, integrity and authenticity.
All the ones I've looked at are shitty source code.
Because I think that we can do much better than the pile of IT-shit we have produced until now.
I am actively trying to do that, through my own code, through the articles that I write and through the discussions I engage in.
Not everybody has the luxury of doing that -- the kids have to be fed and the mortgage has to be paid and a job is a job -- but those of us who can have an obligation to try to make the world a better place, by raising the quality of IT.
-- Freely licensed
-- Has been ported to FreeBSD
-- Is certainly less of a tangled mess than OpenSSL
I know they've been making a lot of switches lately in terms of what ships with the base OS, but pretty much all of them (that I know of) have been more akin to forgetting someone else's old toys and playing with their own shiny new ones. OpenSSH and PF in particular seem to still be alive and well, and they've been around for quite a long time.
It doesn't make much sense to get rid of cookies alone, not when there are multiple ways of storing stuff in a user's browser, let alone for fingerprinting - http://samy.pl/evercookie/
Getting rid of cookies doesn't really help with privacy at this point and just wait until IPv6 becomes more widespread. Speaking of which, that EU requirement is totally stupid.
The author also makes the mistake of thinking that we need privacy protections only from the NSA or other global threats. That's not true, we also need privacy protections against local threats, such as your friendly local Internet provider, that can snoop in on your traffic and even inject their own content into the web pages served. I've seen this practice several times, especially on open wifi networks. TLS/SSL isn't relevant only for authentication security, but also for ensuring that the content you receive is the content that you asked for. It's also useful for preventing middle-men from seeing your traffic, such as your friendly network admin at the company you're working for.
HTTP/2.0 probably has flaws, but this article is a rant about privacy and I feel that it gets it wrong, as requiring encrypted connections is the thing that I personally like about HTTP/2.0 and SPDY. Having TLS/SSL everywhere would also make it more costly for the likes of the NSA to do mass surveillance of users' traffic, so it would have benefits against global threats as well.
You seem confused about cryptography.
Against NSA we only need secrecy; privacy is not required.
Likewise integrity does not require secrecy, but authentication (which doesn't require secrecy either).
You don't think that anybody can figure out what you are doing when you open a TCP connection to queue.acm.org right after they posted a new article, even if that connection is encrypted? Really? How stupid do you think the NSA is?
Have you never heard of metadata collection?
And if you like your encrypted connections so much, you should review the certs built into your browser: that's who you trust.
I'll argue that's not materially better than unencrypted HTTP.
(See also Operation Orchestra, I don't think you perceive the scale of what NSA is doing)
Visiting an article doesn't happen only right after it was posted. And sure, the NSA can figure out ways to track you, but their cost will be higher. Just like with fancy door locks and alarm systems, making it harder for thieves to break in means the probability of it happening drops. Imperfect solutions are still way better than no protections at all (common fallacy nr 1).
All such rants are also ignoring that local threats are much more immediate and relevant than the NSA (common fallacy nr 2).
On trusting the certificate authorities built into my browser, of course, but then again this is a client-side issue, not one that can be fixed by HTTP 2.0 and we do have certificate pinning and even alternatives championed by means of browser add-ons. Against the NSA, nothing is perfect of course, unless you're doing client-side PGP encryption on a machine not connected to the Internet. But then again, that's unrelated to the topic of HTTP/2.0.
With a self-signed cert they would have to do a Man In The Middle attack on you to see your traffic.
They don't have the capacity (or ability! many of their fiber taps are passive) to do that to all the traffic all the time.
The problem with making a CA-blessed cert a requirement for all or even most of the traffic, is that it forces the NSA to break SSL/TLS or CAs definitively, otherwise they cannot do their job.
Fundamentally this is a political problem, just slapping encryption on traffic will not solve it.
But it can shift the economy of the situation -- but you should think carefully what way you shift it.
Isn't the whole point of pervasive authenticated encryption to prevent the NSA from "doing their job" (at least the spying part of it)?
> But it can shift the economy of the situation -- but you should think carefully what way you shift it.
It shifts more than the economy of the situation. It also forces a shift from passive attacks to active attacks, which are easier to detect and harder to justify. Forcing the attacker to justify their acts has a political effect.
Which essentially means that terrorists and pedophiles are gonna be using encryption to harm the kids, and browsers will have to obey a new wonderful kids-protecting law and add a backdoor.
Instead you will see key-escrow laws or even bans on encryption.
You cannot solve the political problem by applying encryption.
What I love about technology is that it cannot be stopped with lawmaking.
Very few lawmakers have really tried, and few technologies have been worth it in the first place.
The relevant question is probably whether technology can be delayed by lawmaking, and for how long.
There is no doubt however that policies can be changed, most places it just takes elections but a few places may need a revolution.
Thinking this is a problem you can solve by rolling out SSL or TLS is incredibly naive.
That's a defeatist attitude: "we can't win, so let's not even try".
It's not guaranteed that pervasive authenticated encryption will lead to key-escrow laws or bans on encryption. In fact, the more common authenticated encryption is, the harder is to pass laws against it.
As an example, consider how common encrypted wireless networks are nowadays. A blanket ban on encryption would be opposed by many of the wireless network owners. And that's only one use of encryption.
Key escrow has the extra problem of being both costly and very complex to implement correctly.
In the meantime we should not cripple our protocols, hoping that the NSA will go "aahh shucks!" and close shop, when the law clearly tells them to "Collect everything."
Yes! Finally 99% of users won't be hacked by a default initial plaintext connection! We finally have safe(r) browsing.
", in at least three out of four of the major browsers,"
You had ONE JOB!
Jokes aside, privacy wasn't a consideration in this protocol. Mandatory encryption is really useful for security, but privacy is virtually unaffected. And the cookie thing isn't even needed; every browser today could implement a "click here to block cookies from all requests originating from this website" button.
We need the option to remove encryption. But it should be the opposite of what we currently do, which is to default to plaintext unless you type an extra magic letter into the address (which no user ever understands, and is still potentially insecure). We should be secure by default, but allow non-secure connections if you type an extra letter. Proxies could be handled this way by allowing content providers to explicitly mark content (or domains) as plaintext-accessible.
The problem I fear is that as everyone adopts HTTP/2 and HTTP/1.1 becomes obsolete (not syntactically but as a strict protocol), it may no longer be possible to write a quick-and-dirty HTTP implementation. Before, I could use a telnet client on a router to test a website; now the router may need an encryption library, a binary protocol parser, and decompression and multiplexing routines just to get a line of text back.
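The quick-and-dirty workflow being mourned here -- typing an HTTP/1.1 request by hand over a raw TCP connection -- is a few lines in any language. A minimal sketch in Python, run against a throwaway local server so it is self-contained; this is exactly the kind of two-line text exchange that a binary, multiplexed, encrypted HTTP/2 endpoint no longer permits.

```python
import socket
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

# A throwaway local server on an ephemeral port, standing in for the site.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with socket.create_connection(("127.0.0.1", port)) as sock:
    # The whole "client" is plain text, just as it would be over telnet.
    sock.sendall(b"GET / HTTP/1.1\r\n"
                 b"Host: 127.0.0.1\r\n"
                 b"Connection: close\r\n\r\n")
    reply = b""
    while chunk := sock.recv(4096):
        reply += chunk

# Prints the status line, e.g. "HTTP/1.0 200 OK".
print(reply.split(b"\r\n")[0].decode())
server.shutdown()
```

One readable status line back, no TLS stack, no HPACK, no frame parser.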
As for the performance side, SPDY is probably not perfect, but it seems to generally improve over current HTTP, even over a secure connection. But even if it didn't, HTTPS seems to add negligible overhead, and compared to the security it gives I think it's well worth it.
HTTP is supposed to have had opportunistic encryption, as per RFC 7258 (Pervasive Monitoring Is an Attack, https://news.ycombinator.com/item?id=7963228), but it looks like the corporate overlords don't really understand why it is at all a problem for independent one-man projects to acquire and update certificates every year, for every little site.
As per a recent conversation with Ilya Grigorik over at nginxconf, Google's answer to the cost and/or maintenance issues of https --- just use CloudFlare! Because letting one single party do MITM for the entire internet is so sane and secure, right?
Facebook, Twitter, etc. track you all over the internet with their cookies, even if you don't have an account with them, whenever a site puts up one of their icons for you to press "like".
With client-controlled session identifiers, users would get to choose whether they wanted that.
The reason YC gets by with 22 bytes is probably that they're not trying to turn the details of your life into their product.
I thought that "pointless encryption" was basically the definition of DRM? And the largest video site, traffic-wise (YouTube) is already encrypted.
What principles does he claim the IETF is conceding here?
World Wide Web
Dou Ble U Dou Ble U Dou Ble U
I count three times as many, is this an accent thing?
I'll just keep using HTTP/1.1. It works on my computer.
It's a bit ironic that this story was delivered to many of us (via Hacker News) over SPDY--HTTP/2.0's dominant source of inspiration.
Phk has had issues with the process for quite some time, and I feel that his embitterment about the process has jaded his view of the protocol. Phk on SPDY/HTTP/2.0 http://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/...
Which he has done: https://www.varnish-cache.org/docs/trunk/phk/http20.html
It's a pet peeve of mine when people just fling this sort of accusation about as if every word-count limited column isn't any good unless it's 20 times longer and basically includes half of Wikipedia transitively. It's a column in a trade magazine. There isn't a place for a detailed technical discussion there, so complaining that there isn't one is complaining about something that can't be fixed.
Besides, cards on the table, I think he's basically correct, cynicism and all here. Sometimes the right answer is to just say no, and failure to say no is not good thing when that is what is called for.
You are highlighting my point. The arguments made here are far more pragmatic.
My qualms are with how his points are made in the original blog post, not with what points he is making; many of which are quite valid.
With that said, the totality of his argument lets the perfect be the enemy of the good, which in my opinion is an invariably flawed position.
Results : 889
He's not just some guy who stirs up a bunch of outrage (and zero useful contribution) every year, regardless of what you may think of his arguments.
To me, that's the most fascinating, and entirely relevant (read: not self-serving or self-important) piece of what the author is talking about.
PHK wanted to do the sort of ground up work that would have taken ten years rather than 3.
Instead of spending 10 hypothetical years of their own time, doing the ground up work, they spent a small portion on HTTP/2.0. That portion is, in fact, smaller than the time spent on SPDY overall. So, how much standards-ing style work did they do? How much actual forethought was given besides making SPDY acceptable enough for a draft?
That's my takeaway.
The IETF mailing list archives are open if you'd like to see what 3 years of standardsing looks like. You can also go to the chair's blog (mnot.net) to read the evolution. Or Roy Fielding's presentations on Waka, which were one of the early (2002) catalysts among the IETF-orbiters that led to HTTP/2.
Ultimately Waka was already a decade late, Google had SPDY which had some similarities and was already deployed, and people didn't want to wait another decade to build something no one might adopt. There were aspects of SPDY that needed changing to make it a true protocol standard rather than just a shared library, and a few features that out of rough consensus were changed.
That's how successful standards processes actually work - codifying what's already working, implemented in the field. Implementing a standard no one uses? See Atompub. See XHTML2.
SPDY was the only really viable option for the IETF to choose - it was running at scale and there was knowledge out there about deploying it and its performance.
Although SPDY was the prototype, HTTP/2 isn't SPDY anymore; it has evolved and moved on, taking some of the concepts from SPDY and introducing its own.
Given how long it took HTTP/1.1 to get ratified we suck at ten year standardisation processes.
If the IETF wanted a fresh look at HTTP, they would not have set such a short deadline for submissions.
It was evident from the start that this was about gold-plating & rubber-stamping SPDY, and people saw that and said so, already back then.
HTTP/2 isn't compatible with SPDY any more, but there is no significant difference between them; only a few deck-chairs were arranged differently in HTTP/2.
The fact that people spent ages chewing the cud on a "clarification" of HTTP/1.1 has no impact on how long it would take to define an HTTP/2 protocol.
Quite the contrary, HTTP/2 was a chance to jettison many of the horrors and mistakes that made the HTTP1.1bis effort so maddening.
You can see some of my thinking about what HTTP/2 should have been doing here:
That would be pushing HTTP/2 to irrelevancy the same way HTML5 (for good or for worse) pushed back on XHTML. Sad way to go but if need be ...
The really annoying thing about his rant is the number of assertions he makes without backing them up with data