
Exactly. Please DO NOT mess with protocols, especially legacy critical protocols based on in-band signaling.

HTTP/1.1 was regrettably but irreversibly designed with security-critical parser alignment requirements. If two implementations disagree on whether `A:B\nC:D` contains a value for C, you can build a request smuggling gadget, leading to significant attacks. We live in a post-Postel world: only ever generate and accept CRLF in protocols that specify it, however legacy and nonsensical that might be.
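As a sketch of what that disagreement looks like (toy parsers only, not code from any real server), here are two implementations fed the same bytes:

  # Toy illustration only (not any real server's parser): the same bytes,
  # two header-splitting policies, two different views of the request.
  raw = b"GET / HTTP/1.1\r\nA: B\nC: D\r\n\r\n"
  head = raw.split(b"\r\n\r\n")[0]

  def headers_crlf_only(head: bytes) -> list:
      # Strict: a header line ends only at CRLF, so "C: D" is part of A's value.
      return [line.split(b":", 1)[0] for line in head.split(b"\r\n")[1:]]

  def headers_any_lf(head: bytes) -> list:
      # Lenient: a bare LF also ends a header line, so "C: D" is its own header.
      return [line.rstrip(b"\r").split(b":", 1)[0] for line in head.split(b"\n")[1:]]

  print(headers_crlf_only(head))  # [b'A']
  print(headers_any_lf(head))     # [b'A', b'C']

Put a proxy with one policy in front of a backend with the other and they no longer agree on what headers the request contains.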

(I am a massive, massive SQLite fan, but this is giving me pause about using other software by the same author, at least when networks are involved.)






HTTP is saved here because headers aren't allowed to contain control characters. A server that is strict enough to only recognize CRLF will hopefully also be strict enough to reject requests that contain invalid characters.
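A minimal version of that stricter check might look like this (illustrative sketch, not taken from any particular server): reject any control byte in the header block other than HTAB and the CR/LF that the line-terminator handling deals with.

  # Illustrative sketch: reject control characters in the header block.
  # HTAB (0x09) is legal inside field values; CR (0x0D) and LF (0x0A) are
  # left to the line-terminator handling, which should demand CRLF.
  def has_forbidden_ctl(head: bytes) -> bool:
      for b in head:
          if b in (0x09, 0x0A, 0x0D):
              continue
          if b < 0x20 or b == 0x7F:
              return True
      return False

  print(has_forbidden_ctl(b"Host: example.test\r\nA: B\r\n"))      # False
  print(has_forbidden_ctl(b"Host: example.test\r\nA: B\x00\r\n"))  # True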

The situation is different with SMTP, see https://www.postfix.org/smtp-smuggling.html


"Hopefully" is not a good word to see in an argument that a software proposal is secure.

Myself, I've written an HTTP server that is strict enough to only recognize CRLF, because recognizing bare CR or LF would require more code†, but it doesn't reject requests that contain invalid characters. It wouldn't open a request-header-smuggling hole in my case because it doesn't have any proxy functionality.

One server is a small sample size, and I don't remember what the other HTTP servers I've written do in this case.

______

http://canonical.org/~kragen/sw/dev3/httpdito-readme http://canonical.org/~kragen/sw/dev3/server.s


This would be more persuasive if HTTP servers didn't already widely accept bare 0ah line termination. What's the first major public web site you can find that doesn't?

Going down a list of top websites, these URLs respond with HTTP 200 (possibly after redirections) when sent an ordinary HTTP/1.1 GET request with 0D0A line endings, but respond with HTTP 400 when sent the exact same request with 0A line endings:

  https://br.pinterest.com/ https://www.pinterest.co.uk/
  https://apps.apple.com/ https://support.apple.com/ https://podcasts.apple.com/ https://music.apple.com/ https://geo.itunes.apple.com/
  https://ncbi.nlm.nih.gov/ https://www.salesforce.com/ https://www.purdue.edu/ https://www.playstation.com/
  https://llvm.org/ https://www.iana.org/ https://www.gnu.org/ https://epa.gov/ https://justice.gov/
  https://www.brendangregg.com/ http://heise.de/ https://www.post.ch/ http://hhs.gov/ https://oreilly.com/
  https://www.thinkgeek.com/ https://www.constantcontact.com/ https://sciencemag.org/ https://nps.gov/
  https://www.cs.mun.ca/ https://www.wipo.int/ https://www.unicode.org/ https://economictimes.indiatimes.com/
  https://science.org/ https://icann.org/ https://caniuse.com/ https://w3techs.com/ https://chrisharrison.net/
  https://www.universal-music.co.jp/ https://digiland.libero.it/ https://webaim.org/ https://webmd.com/
This URL responds with HTTP 505 on an 0A request:

  https://ed.ted.com/
These URLs don't respond on an 0A request:

  https://quora.com/
  https://www.nist.gov/
Most of these seem pretty major to me. There are other sites that are public but responded with an HTTP 403, probably because they didn't like the VPN or HTTP client I used for this test. (Also, www.apple.com is tolerant of 0A line endings, even though its other subdomains aren't, which is weird.)

You sure about this? www.pinterest.com, for instance, does not appear to care whether I 0d0a or just 0a.

My apologies, I was using a client which kept the connection alive between the 0D0A and 0A requests, which has an effect on www.pinterest.com. Rerunning the test with separate connections for 0D0A and 0A requests, www.pinterest.com and phys.org are no longer affected (I've removed the two from the list), but all other URLs are still affected.
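For anyone who wants to reproduce this kind of check, a probe along these lines works (illustrative only, not the exact client used above); each variant goes over its own TLS connection:

  # Illustrative probe (not the exact client used for the test above): the same
  # GET is sent over a fresh TLS connection, once with CRLF and once with bare LF.
  import socket, ssl

  def status_line(host: str, terminator: bytes) -> bytes:
      req = terminator.join([b"GET / HTTP/1.1", b"Host: " + host.encode(),
                             b"Connection: close", b"", b""])
      ctx = ssl.create_default_context()
      with socket.create_connection((host, 443), timeout=10) as raw:
          with ctx.wrap_socket(raw, server_hostname=host) as s:
              s.sendall(req)
              return s.recv(4096).split(b"\r\n", 1)[0]

  print(status_line("www.iana.org", b"\r\n"))  # e.g. b'HTTP/1.1 200 OK'
  print(status_line("www.iana.org", b"\n"))    # e.g. b'HTTP/1.1 400 Bad Request'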

I picked one at random --- hhs.gov --- and it too appears to work?

For what it's worth: I'm testing by piping the bytes for a bare-newline HTTP request directly into netcat.


Make sure you're contacting hhs.gov and not www.hhs.gov; the www. subdomain reacts differently.

  $ printf 'GET / HTTP/1.1\r\nHost: hhs.gov\r\n\r\n' | nc hhs.gov 80
  HTTP/1.1 302 Found
  Date: Mon, 14 Oct 2024 01:38:29 GMT
  Server: Apache
  Location: http://www.hhs.gov/web/508//
  Content-Length: 212
  Content-Type: text/html; charset=iso-8859-1
  
  <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
  <html><head>
  <title>302 Found</title>
  </head><body>
  <h1>Found</h1>
  <p>The document has moved <a href="http://www.hhs.gov/web/508//">here</a>.</p>
  </body></html>
  ^C
  $ printf 'GET / HTTP/1.1\nHost: hhs.gov\n\n' | nc hhs.gov 80
  HTTP/1.1 400 Bad Request
  Date: Mon, 14 Oct 2024 01:38:40 GMT
  Server: Apache
  Content-Length: 226
  Connection: close
  Content-Type: text/html; charset=iso-8859-1
  
  <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
  <html><head>
  <title>400 Bad Request</title>
  </head><body>
  <h1>Bad Request</h1>
  <p>Your browser sent a request that this server could not understand.<br />
  </p>
  </body></html>

Ahh, that was it, thanks.

And this whole exercise is an example of why this is a non-starter proposal (at least the "change existing implementations" part).

How much do we expect the domain owners to invest in changing an implementation that already works? Hint: it's a number smaller than epsilon.

Google might, but their volume is so high they care about the cost of individual bytes on the wire.


This exercise was about demonstrating that our security can't rely on making sure there's a carriage return in HTTP line termination, because there is no such norm. See the root of the thread, where I asked the question.

Oh, I agree it's about that too, but my point is you've already volunteered more time and resources investigating the situation than most companies would be willing to spend.

As the parent mentioned, it's security critical that every HTTP parser in the world - including every middleware, proxy, firewall, WAF - parses the headers in the same way. If you write an HTTP parser for a server application, it's imperative you don't introduce random inconsistencies with the standard (I can't believe I have to write this).

On the other hand, as a client, it's OK to send malformed requests, as long as you're prepared for them to fail. But it's a weird flex; legacy protocols have many warts, so why die on this particular hill?


That appears to be an argument in favor of accepting bare-0ah, since as a positive statement that is the situation on the Internet today.

Wouldn't the safest thing, security-wise, be to fail fast on bare 0ah?

As a web server, you may not know which intermediate proxies the request traversed before arriving at your port. Given that request smuggling is a thing, failing fast with no further parsing on any protocol deviation seems to be the most secure thing.


I mean the safest thing would be to send an RST as soon as you see a SYN for 80/tcp.

That would have a severe downside of not letting your customers access your website.

Fast-abort on bare-0ah will still be compatible with all browsers and major HTTP clients, thus providing extra mitigations practically for free.
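A fast-abort line reader is only a few lines; here is a sketch of what "reject bare 0ah outright" means (illustrative, not from any particular server):

  # Sketch of a fail-fast line reader (illustrative, not any specific server):
  # a line terminator must be CRLF; a bare LF aborts the request entirely.
  def read_line(buf: bytes) -> bytes:
      i = buf.index(b"\n")            # ValueError if no terminator yet
      if i == 0 or buf[i - 1:i] != b"\r":
          raise ValueError("bare LF: rejecting request")
      return buf[:i - 1]

  print(read_line(b"GET / HTTP/1.1\r\nHost: x\r\n\r\n"))  # b'GET / HTTP/1.1'
  try:
      read_line(b"GET / HTTP/1.1\nHost: x\n\n")
  except ValueError as e:
      print("rejected:", e)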


Wouldn't not replying at all be the safest?

If you expect to be behind a reverse proxy that manages internal headers for you (removes them on incoming requests, and adds them based on internal criteria) then accepting bare 0x0a newlines could be a security vulnerability, as a malicious request could sneak an internal header that would not be stripped by the reverse proxy.

Only in the case the reverse proxy does not handle bare 0a newlines?

That was already motivated by Postel's Law. Changing what the strict form is goes a step beyond that; relying on the same law to justify unilaterally transposing the form asks too much of middlebox implementations of just about any line-oriented protocol, and possibly violates Postel's Law itself by asserting its inverse.

I don't believe in Postel's Law, but I also don't believe in reverential adherence to standards documents. Make good engineering decisions on their own merits. This article is right: CRLF is dumb. You know who agrees with me about that? The IETF, in their (very old) informational RFC about the origins of CRLF in their protocols.

Yes, CRLF is dumb. Justifying that complaint seems unnecessary; it's widely acknowledged. A productive inquiry looks at why fixing it hasn't happened yet. Don't confuse that line of thought with calling for more failure.

This is unrealistic, though:

> I don't believe in Postel's Law

All the systems around us that work properly do believe in it, and they will continue to do so. No-one who writes MTAs or reverse proxies &c is gonna listen to the wolves howling at the moon for change when there's no better plan than "ram it through unilaterally". Irrespective of what any individual may believe, Postel's Law remains axiomatic in protocol design & implementation.

More constructively, it may be that line-oriented protocols will only move towards change when they can explicitly negotiate line termination preferences during the opening handshake/banner/key exchange etc, which inevitably means a protocol revision in every case and very careful consideration of when CRLF is passed through anyway (e.g. email body).


Hold on: if you do believe in Postel's Law, you agree with me: just send newlines.

> As the parent mentioned, it's security critical that every HTTP parser in the world - including every middleware, proxy, firewall, WAF - parses the headers in the same way. If you write an HTTP parser for a server application, it's imperative you don't introduce random inconsistencies with the standard (I can't believe I have to write this).

No it isn't, at least not critical to all those parsers. My HTTP server couldn't care less if some middleboxes that people go through are less or more strict in their HTTP parsing. This only becomes a concern when you operate something like a reverse proxy AND implement security-relevant policies in that proxy.


Hrm, this is what I get for logging in to HN from my phone. It’s possible I am confusing this with one of the other exploitable HTTP/1.1 header parser alignment issues.

Maybe this was so widespread that ~everything already handles it because non-malicious stuff breaks if you don't. In that case, my bad, but I still would like to make a general plea as an implementer for sticking strictly to specified behavior in this sort of protocol.


Gunicorn expects `\r\n` for lines (see gunicorn/http/message.py:read_line), though it's possible that every middleware that is in front of gunicorn in practice normalizes lines to avoid this issue.

Yep, tested it locally, you're right; gotta CRLF to gunicorn.

We're talking about servers and clients here. The best way to ensure things work is to adhere to an established protocol. Aside from saving a few bytes, there doesn't seem to be any good reason to deviate.

I'm saying the consistency that Filippo says our security depends on doesn't really seem to exist in the world, which hurts the persuasiveness of that particular argument in favor of consistency.

But no one expects 0ah to be sufficient. Change that expectation, and now you have to wonder if your middleware and your backend agree on whether the middleware filtered out internal-only headers.

Yeah, I'm not certain that this is a real issue. It might be? Certainly, I'm read in to things like TECL desync. I get the concern, that any disagreement in parsing policies is problematic for HTTP because of middleboxes. But I think the ship may have sailed on 0ah, and that it may be the case that you simply have to build HTTP systems to be bare-0ah-tolerant if you want your system to be resilient.

But what's bare-0ah-tolerant? Accepting _or_ ignoring bare 0ah's means you need to ensure all your moving parts agree, or you end up in the "one part thinks this is two headers, another thinks it's one header" situation.

The only situation where you don't need to know two policies match is when one of the policies rejects one of the combinations outright. Probably. Maybe.

EDIT: maybe it's better phrased as "all parts need to be bare-0ah-strict". But then it's fine if it's bare-0ah-reject; they just need to all be strict, one way or the other.


Security also doesn't exist as much as we'd like it to, which doesn't excuse making it exist even less.

Well, you can achieve the desired behavior in all situations by ignoring CR and treating any seen LF as NL.

I just don’t see why you’d not want to do that as the implementer. If there’s some way to exploit that behavior I can’t see it.


The exploit is that your request went through a proxy which followed the standard (but failed to reject the bare NL), and the client sent a header after a bare NL. You think that header came from the proxy, but it actually came from the client - such as the client's IP address in a fake X-Forwarded-For, which the proxy would have removed if it had parsed it as a header.

This attack is even worse when applied to SMTP because the attacker can forge emails that pass SPF checking, by inserting the end of one message and start of another. This can also be done in HTTP if your reverse proxy uses a single multiplexed connection to your origin server, and the attacker can make their response go to the next user and desync all responses after that.
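To make the shape of that concrete, here is a toy model of the HTTP case (assumed policies for illustration; no real proxy or server code): a CRLF-only proxy fails to strip a forged X-Forwarded-For that a lenient backend then trusts.

  # Toy model of the scenario above (assumed policies, not real proxy code):
  # the proxy strips X-Forwarded-For using CRLF-only parsing; the backend is
  # lenient about bare LF, so the forged header survives and gets trusted.
  raw = (b"GET / HTTP/1.1\r\n"
         b"Host: example.test\r\n"
         b"Innocent: yes\nX-Forwarded-For: 1.2.3.4\r\n"
         b"\r\n")

  def proxy_strip(req: bytes) -> bytes:
      kept = [l for l in req.split(b"\r\n")
              if not l.lower().startswith(b"x-forwarded-for:")]
      return b"\r\n".join(kept)

  def backend_headers(req: bytes) -> dict:
      head = req.replace(b"\r\n", b"\n").split(b"\n\n")[0]
      out = {}
      for line in head.split(b"\n")[1:]:
          name, _, value = line.partition(b":")
          out[name.strip().lower()] = value.strip()
      return out

  forwarded = proxy_strip(raw)
  print(backend_headers(forwarded).get(b"x-forwarded-for"))  # b'1.2.3.4'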


Thanks, that was actually a very clear description of the problem!

The problem here is not using one or the other, but using a mix of both.


And the standard is CRLF, so you're either following the standard or using a mix.

There are very good reasons not to deviate: mismatches with various other things that may or may not be on the path can affect behavior - reverse proxies, load balancers, and so on.

What a weird reaction. Microsoft’s use of CRLF is an archaic pain in the ass. Taking a position that it should be deprecated isn’t radical or irresponsible — Microsoft makes gratuitous changes to things all of the time, why not this one?

Hipp is probably one of the better engineering leaders out there. His point of view carries weight because of who he is, but should be evaluated on its merits. If Microsoft got rid of this crap 30 years ago, when it was equally obsolete, we wouldn’t be having this conversation; if nobody does, our grandchildren will.


No one is talking about Microsoft and whatever it does on its platform, the parent comment is about network protocols (HTTP, SMTP and so on..).

I understand that it is tempting to blame Microsoft for \r\n proliferation, but that does not seem to be the case - \r\n comes from the era of teletypes and physical VT terminals. You can still see the original "NL" in action (move down only, do not go back to the start of the line) on any Unix system by typing "(stty raw; ls)" in a throw-away terminal.


The author of the post specifically addressed this:

“Today, CR is represented by U+000d and both LF and NL are represented by U+000a. Almost all modern machines use U+000a to mean NL exclusively. That meaning is embedded in most programming languages as the backslash escape \n. Nevertheless, a minority of machines still insist on sending a CR together with their NLs”

Who is the “minority”?

He also takes the position that the legacy behavior is fine for a tty, as it’s emulating a legacy terminal.


CRLF was the correct way to implement a new line the way we think of it now, because teletypes and typewriters considered the “return to the 0th column” and “go to the next line” to be different things that are each valid on their own.

CRLF was the standardized way to implement “go down one line and return to column zero” and they’re the only ones who implemented new lines correctly at the outset.

Blaming Microsoft now, because they like backwards compatibility above almost everything else, is misplaced and myopic.


Additionally, it is dishonest to bring Microsoft into the discussion like that. The discussion revolved around _standardized_ network protocols, which are entirely unrelated to MS-DOS text formats.

I didn't say we shouldn't get rid of it. I'm saying we shouldn't intentionally break existing protocols.

He's not arguing for deprecating it. He's arguing for just not complying and hoping for the best. He explicitly says so right in the article.

That is never the right approach. You intentionally introduce a problem you expect others to fix. All because he doesn't like 0x0d. The protocol is what it is. If you want to make more sane decisions when designing a new protocol (or an explicitly newer version of some existing one) then by all means, go for it. But intentionally breaking existing ones is not the way to go.


Took me a second to get what was going on here, but basically the idea is that your middleware might not see `C:D`, but then your application _does_ see `C:D`.

And given your application might assume your middleware does some form of access control (for example, `X-ActualUserForReal` being treated as an internal-only header), you could get around some access control stuff.

Not a bytes-alignment thing but a "header values disagreement" thing.

This is an issue if one part of your stack parses headers differently than another in general though, not limited to newlines.


I wouldn't be too worried or make personal judgements; he says the same thing you are (though I assume you disagree).

> massive SQLite fan, but this is giving me pause about using other software by the same author

Even if I wanted to contribute code to SQLite, I can't. I acknowledge the fact God doesn't exist, so he doesn't want my contributions :P


He does not want your code anyway; SQLite is public domain. This has several implications, one of which is that the author wants nothing from you. Note that public domain is fundamentally different from the usual method of releasing code, which is to issue a license to distribute a copyright-protected work. Putting a thing into the public domain is to renounce any ownership over it.

I think the proper spirit of the thing is that if you have patches to SQLite, you just maintain them yourself. If you are especially benevolent you will put the patches in the public domain as well, and if they are any good perhaps the original author will want them.

In fact the public domain is so weird that some countries have no legal understanding of it. Originally the concept was just the stance of the US federal government that because the works of the government were for the people, these works were not protected by copyright and could be thought of as collectively owned by the people, or in the public domain. Some countries don't recognize this: everything has to be owned by someone. And SQLite was legally unable to be distributed in these countries; it would default to copyright with no license.


> but this is giving me pause about using other software by the same author

Go read the article again. I think you'll be pleasantly surprised.



