Hacker News
HTTP 308 Incompetence Expected (insanecoding.blogspot.com)
234 points by aw3c2 on Feb 16, 2014 | 43 comments

Can anyone here provide any explanation of how the changes in HTTP2 might not be idiotic?

The changes discussed in the post just seem dumb to me, but I assume there has to be some reasoning behind them.

Alright, think of it this way.

People use HTTP for two things these days:

1. Its originally-stated purpose--an application-layer protocol that allows web browsers to retrieve hypermedia documents from web servers. In this usage, HTTP replaced Gopher+FTP.

2. A transport-layer protocol, with features such as identified sub-flows (requests in a pipelined keepalive connection, websockets, etc.), several usefully-different varieties of caching, protocol feature autonegotiation, presentation-layer autonegotiation (Accept headers), automatic redirection semantics, optional encryption (TLS+HSTS+CORS = probably the most well-thought-out security-boundary semantics of any protocol we've got), etc. In this usage, HTTP basically supersedes TCP.

There's a vicious circle here: as HTTP gains traction in sense #2, businesses become increasingly unwilling to allow anything other than HTTP-in-sense-#2 through their firewalls. Eventually, HTTP may be the only transport-layer protocol.

And, given that, on a stance of complete pragmatism where we can't prevent this from happening, only try to make the best of the situation... we need an HTTP-in-sense-#2 that can actually support being used as a universal transport-layer protocol for all of the Internet's traffic.

What does that mean? Well, it means, for one thing, making HTTP lower-overhead (i.e. binary.) It means making HTTP not only work in situations we'd previously have used TCP (e.g. websockets), but also situations where we'd have used UDP (e.g. VoIP streaming.) It means, well, doing pretty much everything HTTP2 does.

Note, though, that HTTP in sense #1 will likely always be around. Nobody who uses HTTP to transfer hypermedia documents between web browsers and web servers needs to switch to HTTP2. HTTP2 can do that, but it isn't for that. (Though, practically, doing HTTP-type stuff over HTTP2 will likely be both faster and more secure.)

> Eventually, HTTP may be the only transport-layer protocol.

Worse is better strikes again!

No, seriously; this is terrifying.

That it's terrifying is true, but it's always been happening, and HTTP is just the next step in it. See here (the comments especially):


None of that explains the changes to redirect codes, which are the main point of the article. Those are what sound truly brain dead.

No, but it clearly states the flaw of HTTP/2.0: a broken-by-design replacement for all possible TCP/IP layers, built on top of a stack that is itself subtly broken in places.

Would you trust this unstable "pragmatic" house of cards to deliver on its promise of "working"?

The author of this article is wrong in assuming 301 is used as it is defined by existing standards.

HTTP2 redefines 301 to match current practice (permanent redirect that changes from POST to GET), and defines a new 308 to try again to get a permanent redirect that doesn't change the method.

I think the reasoning is "Embrace, Extend, Extinguish"

"Veni, vidi, vici."

While browsers can probably "do anything" with a 301 or 302, I think in practice it's simpler.

I think the issue here is that 301 and 302 were originally intended to preserve the HTTP method but they became permanent and temporary versions of "issue a new request with a GET". So to try and fix that they provided 307 (and now 308) as temporary and permanent versions of "this resource changed location, so reissue this request at the new URL".

I actually wrote a post about this a couple of days before RFC 2616 got marked for official deprecation: https://aprescott.com/posts/http-redirects

I plan on updating that with more information once a proper RFC deprecates 2616 and 308 makes its way into something other than a referenced alternative, as it is in the current draft last time I checked.

Also, for fun, try pointing curl at a server returning various response codes and see what it does with `-X [method]` and compare it with the latest Chrome and Firefox.
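You can run a version of that experiment locally with just the Python standard library: stand up a throwaway server whose redirect target echoes back the method the follow-up request actually used, and see what urllib does. A sketch (paths and port are arbitrary; this demonstrates urllib's behavior only, other clients may differ):

```python
import http.server
import threading
import urllib.error
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    """Tiny test server: POST /301 and POST /307 redirect to /target."""

    def _redirect(self, code):
        self.send_response(code)
        self.send_header("Location", "/target")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def _reply(self):
        # Echo back the method the follow-up request actually used.
        body = self.command.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        self._reply()

    def do_POST(self):
        if self.path == "/301":
            self._redirect(301)
        elif self.path == "/307":
            self._redirect(307)
        else:
            self._reply()

    def log_message(self, *args):
        pass  # keep the demo output clean

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# POST to /301: urllib silently resubmits as GET, matching deployed browsers.
with urllib.request.urlopen(f"{base}/301", data=b"payload") as resp:
    after_301 = resp.read().decode()
print(after_301)  # GET

# POST to /307: urllib refuses to auto-resubmit the POST and raises instead.
try:
    urllib.request.urlopen(f"{base}/307", data=b"payload")
    after_307 = "followed"
except urllib.error.HTTPError as err:
    after_307 = err.code
print(after_307)  # 307

server.shutdown()
```

So even within one stdlib client you get both behaviors: the pragmatic POST-to-GET rewrite on 301, and a hard stop on 307.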

I thought OP said the situation with 301, while not perfect, is much much better than 302. Even if clean-slate 307 and 308 codes are a great idea, I think OP is concerned that redefining 301 to be excessively permissive will make things worse not better -- in that 301 will go from fairly reliable to being as bad as 302.

302 is already bad, and 301 is already wrongly implemented, as RFC 2616 itself notes. Ultimately I doubt either is going to see a "fixed" implementation replace a buggy one that's lasted as long as it has.

That doesn't answer the question, namely, "will loosening the definition for 301 make things worse than not changing it?". In this thread I've not seen a solid argument that it won't make things worse, only long redirections (ha!) from the point. Are you saying that 301 is so widely, badly implemented as to be a lost cause, as the author concedes 302 is?

I don't know enough about client implementations to say whether it is unsalvageable, but let's say it is: why would 308 exist? That suggests the HTTP spec folks believe 301's meaning has fundamentally changed. With new-301, new-302/303, 307 and 308, we'd cover each of the 4 cases. That seems to strongly suggest it's necessary.
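Laying out those 4 cases (plus 303) explicitly may help; this is an illustrative paraphrase of the httpbis draft semantics, not a quotation of it:

```python
# The redirect matrix as carved up post-httpbis: {permanent, temporary}
# crossed with {method-may-rewrite, method-preserved}, plus 303's
# always-GET case. Descriptions are paraphrases, for illustration only.
REDIRECTS = {
    301: ("permanent", "method MAY change to GET (matches deployed browsers)"),
    302: ("temporary", "method MAY change to GET (matches deployed browsers)"),
    303: ("temporary", "always re-request with GET (see other)"),
    307: ("temporary", "method preserved"),
    308: ("permanent", "method preserved"),
}

for code, (permanence, behaviour) in sorted(REDIRECTS.items()):
    print(code, permanence, "-", behaviour)
```

On that reading, 301/302 become the lenient pair and 307/308 the strict pair, which is exactly the "cover all 4 cases" argument.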

You can't just "let's say" 301 is unsalvageable, because that's the only outside fact that determines whether TFA's argument is correct. You seem to be assuming that the committee's decision is evidence that it's a good idea, which is exactly what TFA is trying to determine.

Meanwhile, hobohacker got around to answering the real question.

I think the author of this blogpost has a few things off:

- HTTP2 != httpbis. Work on both is being done by the same working group, "httpbis". http://datatracker.ietf.org/wg/httpbis/charter/ covers this. httpbis (http://stackoverflow.com/questions/9105639/httpbis-what-does...) was originally chartered to revise HTTP/1.1 (RFC 2616):

The working group will refine RFC2616 to:

* Incorporate errata and updates (e.g., references, IANA registries, ABNF)

* Fix editorial problems which have led to misunderstandings of the specification

* Clarify conformance requirements

* Remove known ambiguities where they affect interoperability

* Clarify existing methods of extensibility

* Remove or deprecate those features that are not widely implemented and also unduly affect interoperability

* Where necessary, add implementation advice

* Document the security properties of HTTP and its associated mechanisms (e.g., Basic and Digest authentication, cookies, TLS) for common applications

As for the HTTP/2 work, here's a snippet from the charter on that: The Working Group will produce a specification of a new expression of HTTP's current semantics in ordered, bi-directional streams. As with HTTP/1.x, the primary target transport is TCP, but it should be possible to use other transports.

- He seems to think the httpbis folks gratuitously redefined 301. It should be noted that RFC2616 (which, by definition, predates the httpbis work since httpbis is defined to revise RFC2616) had already noted the issue with 301 (http://tools.ietf.org/html/rfc2616#section-10.3.2):

> Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request.

- It's unclear to me whether or not the author acknowledges the existence of buggy implementations as noted in section 10.3.2. It's an open question as to what to do in the presence of buggy implementations. From a server standpoint, if the client is buggy, and you don't want to break the client (willingness to break clients probably depends on how many of the server's users use that client), then you will attempt to work around it, irrespective of what the standard says. Therefore, it's simply pragmatic to ignore the spec if it doesn't mirror reality, and pragmatic spec editors may update the spec to acknowledge this difference.

- As far as current status of the various 308 usages, Julian (author of the 308 draft) is lobbying major user agents to adopt this, and has written up a status update on the Chromium bug tracker: https://code.google.com/p/chromium/issues/detail?id=109012#c....

It sounds like you're well-marinated in standards bodies, and that's a good thing -- it's tough, often thankless work that someone needs to do.

For the rest of us, the language from OP sure sounds like it's saying, "yeah do whatevs with 301, we give up".

People often read RFCs in a hurry. Wouldn't this be a great place to use a "SHOULD NOT" (change the request method)?

If you're saying "MUST NOT" would be bad because the horse is out of the barn, I understand. But the draft language now sure sounds like "MAY", and the OP has a good point that it's likely to encourage more wrong behavior, not less.

At least IMHO. Again, I am not a standards lawyer, so please take this feedback accordingly.

I guess I should out myself as a Chromium HTTP stack maintainer (since 2009, so this behavior predates me). One might consider me a domain expert here. I participate in IETF HTTPbis for the HTTP/2 work as the primary Chromium representative. I am not involved with the RFC 2616 revision work as that's tough, thankless work, that thank god we have Julian Reschke and Roy Fielding working on. As far as I'm concerned, I owe them a drink every time I see them. They do an awful lot of legwork talking to various implementations and trying to build consensus on actually conforming with the standard and all its edge cases. It's really quite unfortunate to see this blog post author treat them so unfairly, although I can see how one might easily jump to his conclusion.

Now, as far as "SHOULD NOT", that's a reasonable thought for people not aware of what popular user agents currently do. The thing is, the majority of major browsers rewrite POST to GET on a 301. Here's my browser's code for it: https://code.google.com/p/chromium/codesearch#chromium/src/n.... Here's Firefox's code for it: http://mxr.mozilla.org/mozilla-central/source/netwerk/protoc.... To my knowledge, all browsers implement this behavior. We basically copied IE's behavior, because, IE did it and websites expected all user agents to do what IE did. Story of the web, sound familiar? :P

So, as you can see, Julian was merely acknowledging the pragmatic reality of the situation when he updated the httpbis specs to reflect this behavior: http://trac.tools.ietf.org/wg/httpbis/trac/changeset/1428. The reasoning behind it is covered in the introduction to the relevant section in the httpbis docs: http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-2....

"SHOULD NOT" implies that our implementations are behaving badly. Now, it's true, our implementations may not be behaving ideally from a spec cleanliness point of view, but interop trumps spec cleanliness, at least from the perspective of anyone who actually deploys real software on the internet. So it's probably best for the spec to acknowledge this and officially allow this. Specs that don't mirror reality are...probably not just useless, but actively harmful.

I hope that explains things, cheers.

You're entirely right, and it's not just browsers. Your statement about 'user agents' was the better one. Consider also curl (http://curl.haxx.se/docs/manpage.html#-L) (and presumably libcurl), Python's requests (https://github.com/kennethreitz/requests/blob/master/request...) library, Python's urllib.request (http://docs.python.org/3.3/library/urllib.request.html#urlli...) (what used to be urllib2); the list goes on. I don't believe I've ever seen an HTTP client library that doesn't do 301 method conversion.

So basically the entire premise of the article, that

> existing practice today is that 301, 303, and 307 are used correctly pretty much everywhere

is flat out wrong. I guess that's a nicer and more probable resolution than "standards people suck!".

From the rest of the article, I can only assume it means 'correctly' in the sense of the reality arrived at by both user agents and servers after years of standards-flouting, which is now 'correct' if unspecified behavior.

Makes sense. Clears up the table at the end too. In reality, as with HTML5, such a change matches historical and current browser behavior while trying to offer a future behavior that differs. The pragmatic approach has been and will likely continue to be the use of ?method=post at least until we get better browsers adopted across the board...

The fact that browsers are buggy does not justify listing that as expected behavior. It's like saying "some cars are made of paper and everyone dies in any collision over 20 km/h, but let's make this the expected behavior", or "some chefs use cyanide instead of salt, therefore force salt manufacturers to add cyanide to all salt". These days, deploying a new browser version to users doesn't require a major operating system update that would take 5 years to cover 90% of users.

I recently attempted to redirect a POST request being made by Apple's NSURLConnection. I say "attempted" because I could not find any status code that didn't make it revert to a GET for the subsequent request. 301, 302, 303, and even 307 didn't work. I finally ended up telling Apache to proxy the request to the real URL rather than trying to redirect the client.

(It is possible to override this behavior in the client with a bit of code, but I was trying to make this work with software that had already shipped.)
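For reference, the server-side workaround described above looks roughly like this in Apache (hypothetical paths and backend host, for illustration only; assumes mod_proxy and mod_proxy_http are loaded):

```apache
# Rather than redirecting the POST (which the client downgrades to GET),
# proxy it server-side so the client never sees a 3xx at all.
ProxyPass        /submit https://backend.example.com/submit
ProxyPassReverse /submit https://backend.example.com/submit
```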

Side note: don’t use serif fonts such as Georgia with small characters. I changed the font to Helvetica and the readability was much better.

Both the font size and colour scheme on that page hurt my eyes.

On the plus side, it gave you something to complain about.

Pointing out flaws is a perfectly valid and useful activity, especially when there is significant room for improvement, like in this case.

When it's not clear that someone who can do something about it is even present, it's usually just an "in" to post a low-value comment. Zap all the thousands of aggregator comments complaining about fonts/background-colours and nothing of value would be lost.

> Zap all the thousands of aggregator comments complaining about fonts/background-colours and nothing of value would be lost.

Sure, if you don't have a visual impairment. I do, and even though it's a very mild one, if the authors of one in 100 blog posts submitted to HN change their colour scheme because people complained about it, I would say it is worth it. And frequently I do see people changing their colour schemes based on HN feedback.

The point is: take it to the author, not those standing around you on the bus.

Nonsense. The message is a good one for everyone to hear. It's not a mistake specific to one specific author. It's a mistake commonly made by lots of people, and so the more awareness there is of how to avoid the mistake, the better.

I simply disagree. Internet commenters in general are on red alert looking for any little thing to complain about. So many of these complaints are not worth clogging the tubes with or necessarily even valid at all.

(depth of this thread and meta-meta-meta acknowledged and ceased)

It is still information, which can be useful. For example, I am working on a redesign of a website where they are insisting on making the text unreadably small because "it looks like geocities" if the text is normal. If these kinds of people repeatedly see people complaining about small text, they will eventually get a clue and the web will have fewer unreadable sites.

Just add this custom css to the page and it should be much more legible:

body { background: white; color: black; line-height: 27px; }

#outer-wrapper { font-family: 'Lucida Grande'; font-size: 11pt; }

That will only work on Mac OS X. Just use "font-family: sans-serif", it'll look fine on any system.

Safari Reader mode to the rescue...

Tell the author. This is an aggregator.

The OP might be the author and thus will see this comment. In any case, it can be useful for other people.

Yeah, if internet commenting has taught me anything, it's that concretely fixing things is for the birds.

One quibble: in re "So now you can use a new status code which older browsers won't know what to do with", I feel pretty confident in saying that "older browsers" won't be talking HTTP2...

I feel pretty confident in saying that at least some poorly-written web apps will end up sending new status codes to user agents that are not expecting them.

I also feel pretty confident in saying that it will be the browsers that will be perceived to be "in the wrong" when they reject or ignore such responses, even though it's solely the fault of bad web apps. (See how browsers today need to accept and try to correctly handle totally broken and incorrect markup, for instance.)

side note: reminded me of some of these 7XX HTTP Status codes - Developer Errors https://github.com/joho/7XX-rfc
