Someone should make a thing which does sentiment analysis on commit messages, and flags vitriolic or angry commits as possibly containing more typos than usual.
Edit: Does that last "ok" line mean that two other committers reviewed and OK'ed this patch as well? Three openbsd committers approved a one-line change that didn't fix a single thing? (And people wonder how the OpenSSL bug could live on for two years =) )
I'll just leave this here. http://article.gmane.org/gmane.os.openbsd.misc/211963
Essentially the code mistakenly relies on the freelist being LIFO and not being scrubbed.
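To illustrate the pattern (a made-up sketch, not the actual OpenSSL code): free a buffer, allocate the same size again, and assume the allocator hands back the very block you just freed, contents intact.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *a = malloc(64);
        strcpy(a, "plaintext key material");
        free(a);                /* block goes onto the allocator's freelist */

        char *b = malloc(64);   /* with a LIFO, non-scrubbing freelist this is
                                   the very block just freed, contents intact */
        if (b == a)
            printf("reused block still holds: %s\n", b);

        free(b);
        return 0;
    }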
By all means, call the code what you want, but separate that from the people.
(I'm not endorsing anyone's behavior here, just pointing out how low the right hand side of modern American politics has degenerated, to provide some perspective.)
However, the freelist code somehow fails to follow this convention.
The performance is significantly better, and there is a much smaller danger of running out of space due to fragmentation issues.
What matters is not to always be right, but how you correct errors.
In my opinion, heartbleed is a counterexample to your point. It's a case where making the mistake at all caused a lot of damage, no matter how quickly they patched it.
Having negative ifdefs in the first place is a pretty bad practice. This clearly shows why.
If all your IFDEFs are "positive" they have to be declared somewhere and then it's easy to "deactivate" the things you no longer want, by simply commenting a line out.
It reduces the possibility for error, and gives you a better picture of how many IFDEF conditions you are dealing with around your code.
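A minimal illustration of the "positive" convention (hypothetical feature names, not OpenSSL's actual config):

    /* config.h -- the one place optional features are declared.
       Disabling a feature is just commenting out a line. */
    #define USE_HEARTBEAT
    /* #define USE_SCTP */

    /* elsewhere, code only ever tests for presence: */
    #ifdef USE_HEARTBEAT
        /* heartbeat support compiled in */
    #endif

    /* The negative style, by contrast, quietly compiles the feature in
       whenever the guard symbol is misspelled or simply never defined: */
    #ifndef NO_HEARTBEAT
        /* heartbeat support compiled in */
    #endif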
I personally detest #ifdef's, and would rather have multiple .c files that I can choose from to include in my build. Code infected with #ifdef'itis is unreadable, difficult to test, difficult to maintain, ...
That being said, I fully endorse this commit. There seem to be good intentions behind it, and it was fixed soon after.
More importantly, the working group appears to be geared towards saying "yes" to new features, when exactly the opposite posture is needed. The Heartbeat feature is just the most obvious example, because its reference implementation was so slapdash.
The weirder thing here is that it would even be controversial or newsworthy to disable TCP TLS heartbeats, a feature nobody needs or really ever asked for. Why was it enabled by default in the first place? What sense did that decision ever make?
As far as I can tell, there was only one person who raised the "we don't need this" objection (Nikos Mavrogiannopoulos, GnuTLS principal author). He asked for a defense of the feature on three occasions, and received a response twice, both responses from the same person. There was no further discussion, so either his objection was addressed or he felt further discussion would be unproductive. I would be interested to know which. GnuTLS does ship with heartbeat disabled, so it's possible Nikos continued to express skepticism about the feature.
However this document was eyeballed by many security experts, who I will not name and shame here, who proofread or made comments or edits to the document, without objecting to the general premise.
It seems to me that, inherent in the nature of a working group, the specific parties who are in favor of the feature will always be better represented than the interests of a wider audience that faces the nonspecific harm that the feature might be implemented poorly and introduce a security vulnerability. Sort of a reverse-NIMBY situation if you like. The software community cannot mobilize to combat every new TLS proposal on the grounds of nonspecific harm, but everybody with a pet feature can mobilize to make sure it is standardized.
You're essentially asking the same exact question Theo de Raadt is asking, hence his extreme suspicion of IETF.
Why write a call to setsockopt() when we can just reinvent new features?
Those parameters describe no connection made by browsers.
In other words: application-layer keepalives are only valuable to applications that already have the freedom to define their own truly application-layer keepalive anyways.
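For what it's worth, the setsockopt() call in question is tiny. A sketch, assuming fd is an already-connected TCP socket (the helper name is made up):

    #include <sys/socket.h>

    /* Ask the kernel to probe an idle connection with TCP keepalives. */
    static int enable_keepalive(int fd) {
        int on = 1;
        return setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
    }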
[edit: actually I was under the impression that the payload addressed response order concerns in UDP.]
In the hierarchy of sensible TLS decisions, you have, from most to least reasonable:
1. Not adding new heartbeat extensions to DTLS or TLS.
2. Adding new heartbeat extensions to DTLS only.
3. Adding the same new heartbeat extensions to DTLS and TLS.
4. Adding two different new heartbeat extensions, one for DTLS and the other for TLS.
The penalty for security fails of useless features should be a slow, painful, humiliating death.
I've not read as much as I would like on the IETF's involvement in this, but as I read the situation, Theo has just hurled vitriol at them for specifying a completely reasonable feature that the OpenSSL team implemented incredibly poorly.
It's a feature that had a sensible use case in DTLS (datagram TLS, TLS not run over TCP). It's unclear as to whether the use case was best addressed at the TLS layer, whether it could have been specified as something run alongside TLS, or whether it was something that applications that cared about path MTU could have handled for themselves.
The TLS WG actively discussed whether Heartbeat had reasonable use cases in TCP TLS. Some people agreed, some disagreed. Nobody threatened a temper tantrum if the feature was added to TCP TLS. Therefore: the extension was specified for both TCP TLS --- which does not need extra path MTU discovery, and which already has a (somewhat crappy) keepalive feature --- and DTLS.
The larger problem is in treating TLS like a grab-bag of features for corner-case applications.
I still don't agree with the OpenSSL implementation being the reference spec, if that's what you were referring to, as pygy_ pointed out. Unless the IETF released that code as an institution, I would consider it the same as any other third-party implementation, i.e. not necessarily correct. How am I to know that the RFC's author wrote it unless I go digging? Why should I trust anything they wrote which may or may not have gone through any rigorous checking?
This isn't so much directed at you, tptacek (since your pedigree is well known), as it is the others bashing the IETF for implementation flaws - bash them for what they actually did and perhaps instead of getting angsty at the powers that be, try getting involved in projects like this if you believe they are so important.
(HN feature request: soundbits.)
This whole approach to diluting ostensibly one of the most important standards is beyond unprofessional, it's an example of failures on many layers of meta.
I would really like to see a compelling alternative to keep TLS honest, perhaps with Convergence-like ideas and bare-bones level of features.
The problem is always adoption: corporate fear of change ($upportabi£it¥) and endpoint adoption. But the threat of an alternative might be enough to scare the bejebus out of the WG and get it back on task.
It was actually included in OpenSSL before the RFC was published.
Both the RFC and the code were written by the same person (who denies planting the bug intentionally).
Now, a 64kbyte payload, that's unnecessary for simply making each packet unique. That size was likely chosen to allow for the path MTU discovery aspects.
One could argue, however, that "keepalive" and "path MTU discovery" should not have been commingled, but they were.
That dynamic reminds me of the bias labeled "sympathetic point of view" as described in the essay "Why Wikipedia is not so great": "Articles tend to be whatever-centric. People point out whatever is exceptional about their home province, tiny town or bizarre hobby, without noting frankly that their home province is completely unremarkable, their tiny town is not really all that special or that their bizarre hobby is, in fact, bizarre. In other words, articles tend to a sympathetic point of view on all obscure topics or places."
That is surely not how we want decisions made about TLS.
How I compile OpenSSL. 
( export CONFIGURE_OPTS='no-hw no-rdrand no-sctp no-md4 no-mdc2 no-rc4 no-fips no-engine'; \
brew install https://gist.github.com/steakknife/8228264/raw/openssl.rb )
no-md5 doesn't currently work:
"_EVP_md5", referenced from ...
enveloped content test streaming S/MIME format, 3 recipients: generation error
It's small, modular, documented and tested.
We would all reach further by improving polarssl than by starting another library.
When after all the Snowden and RSA revelations they're not willing to get rid of the putrid influence of NSA inside of IETF, well then...not much trust left there.
1) You send a heartbeat at time t0;
2) You wait until time t1;
3) You send another heartbeat at time t2;
4) You receive a heartbeat reply at time t3;
Does the reply at t3 answer the heartbeat sent at t0 or the one sent at t2? That is the purpose of the payload: to distinguish which reply matches which transmission.
Now, why variable sized with a max of 64k vs. say an 8-byte integer? The variable sized with max of 64k was most likely intended to support the second purpose in the RFC, path MTU discovery. To discover the path MTU, you need to be able to send a "too big packet", as well as adjust the packet size until you find the proper MTU value.
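A rough sketch of the matching half, assuming one heartbeat outstanding at a time and an arbitrary 16-byte random nonce as the payload (these field sizes are illustrative, not the RFC's):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct pending { uint8_t payload[16]; int in_flight; };

    /* Record the nonce sent with a heartbeat request. */
    static void on_send(struct pending *p, const uint8_t nonce[16]) {
        memcpy(p->payload, nonce, sizeof(p->payload));
        p->in_flight = 1;
    }

    /* A reply answers the recorded request only if it echoes the same bytes. */
    static int on_reply(const struct pending *p, const uint8_t *echoed, size_t len) {
        return p->in_flight && len == sizeof(p->payload)
            && memcmp(p->payload, echoed, len) == 0;
    }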
Is it really that much less efficient to do path MTU discovery with a different message/system/module? Why absorb this function into the OpenSSL package?
I feel I am still missing something about the way this system works. Perhaps I just need to educate myself more on security and networking.
The only people who can accurately answer that are the author of the RFC/code, and the TLS committee members who discussed the changes.
From a security standpoint, it is more dangerous to commingle the two, because a bug in one half (path MTU) will also affect the other half (heartbeat). And that is exactly what happened.
> Why absorb this function into the OpenSSL package?
Unknown. Path MTU discovery is supposed to be handled at a low layer in the OSI network stack abstraction (closer to the physical hardware) such that higher level layers/apps should not need to care. Putting it into TLS the protocol is a blatant layering violation.
To discover the MTU you could ping, or send packets on port 80, or any of a myriad of other ways, but no, it's not exactly within SSL, so...
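On Linux, for example, the kernel already tracks the path MTU for a connected socket and will hand it to you. A sketch (IP_MTU is Linux-specific and the helper name is made up):

    #include <sys/socket.h>
    #include <netinet/in.h>   /* IP_MTU (Linux-specific) */

    /* Ask the kernel for its current path-MTU estimate on a connected socket. */
    static int kernel_path_mtu(int fd) {
        int mtu = 0;
        socklen_t len = sizeof(mtu);
        if (getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len) < 0)
            return -1;
        return mtu;
    }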
Rob Pike Responds - Slashdot, 18 Oct 2004: "... but when ssh is the foundation of your security architecture, you know things aren't working as they should."
The entire quote, in context:
10) Biggest problem with Unix - by akaina
Recently on the Google Labs Aptitude Test there was a question: "What's broken with Unix? How would you fix it?"
What would you have put?
Ken Thompson and I started Plan 9 as an answer to that question. The major things we saw wrong with Unix when we started talking about what would become Plan 9, back around 1985, all stemmed from the appearance of a network. As a stand-alone system, Unix was pretty good. But when you networked Unix machines together, you got a network of stand-alone systems instead of a seamless, integrated networked system. Instead of one big file system, one user community, one secure setup uniting your network of machines, you had a hodgepodge of workarounds to Unix's fundamental design decision that each machine is self-sufficient.
Nothing's really changed today. The workarounds have become smoother and some of the things we can do with networks of Unix machines are pretty impressive, but when ssh is the foundation of your security architecture, you know things aren't working as they should.
From: firstname.lastname@example.org (rob pike)
Date: Mon, 1 Jan 2001 09:37:12 -0500
Subject: [9fans] Re: The problem with SSH2
My disagreement with SSH is more specific. It is a securitymonger's
plaything, so has been stuffed with every authentication and encryption
technology known, yet those that are configured when it is installed is
a random variable. Therefore both sides must negotiate like crazy to figure
how to talk, and one often finds that there is no shared language. This is
idiocy. The complexity is silly, but much worse is that there isn't at least
one guaranteed protocol for authentication and encryption that both
ends always have and can use as a fallback. I would argue that that
would always be sufficient, but I know I'm in the minority there. I do
argue that it's demonstrably necessary.
Algorithms everywhere, and not a byte to send.
By making the thing too complicated, they defeat
the very purpose of security. Difficult administration results in
incorrect or inadequate installation. There are cases when I can't
use ssh, a direct consequence.
Russ Cox chimes in:
we're stuck with ssh, but let's not delude
ourselves into thinking it's a good protocol.
(i'm talking about ssh1; ssh2 looks worse.)
If the IETF is the only thing standing between us and any jackass introducing vulnerabilities into everyone's network stack we're in bigger trouble than I thought.
An RFC is authored by engineers and computer scientists in the form of a memorandum describing methods, behaviours, research, or innovations applicable to the working of the Internet and Internet-connected systems. It is submitted either for peer review or simply to convey new concepts, information, or (occasionally) engineering humour. The IETF adopts some of the proposals published as RFCs as Internet standards.
Routing is defined as shuffling packets to where they need to go, unmolested, without messing with the address or port fields. It's specced in the IP RFCs. If you're mangling the traffic, you're violating router requirements.
Nearly all bitcoin pools utilize TCP keepalive to maintain connections with miners.
These are systems in which performance and DoS resistance are paramount.
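If the stock timings are the objection (two hours of idle before the first probe on most stacks), they are tunable per socket anyway. A Linux-specific sketch with made-up values:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT (Linux) */

    /* Probe after 60s idle, every 10s, declare the peer dead after 5 misses. */
    static int tune_keepalive(int fd) {
        int on = 1, idle = 60, intvl = 10, cnt = 5;
        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0) return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0) return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0) return -1;
        return setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
    }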
I would guess that makes them identifiable?
On some connections where I have SSH keepalives disabled, I can suspend my laptop, go for a drive around town with it, come home, resume, and still have my session connected.
What is not necessary for a TCP heartbeat is a payload that would address out of order request/reply sequence (as would be necessary in UDP).
> [A] 64K Covert Channel in a critical protocol.
I'm genuinely surprised. Just as I would also be surprised if Windows 3.1 or magnetic tapes or punch cards were still in use. I thought these technologies were supplanted long ago.
Windows 3.1? I run into it occasionally in places like manufacturing, where upgrade cycles are decades long (I was recently at a client whose most critical machine was built in 1905). FWIW, DOS can still be found everywhere.
Magnetic tape? Really? LTO-6 is up to 2.5TB uncompressed and LTO-7 is imminent.
Punch cards? See: Scantron.
Just because there's a cool new replacement for something doesn't mean you should jump on it, or that the old tech is now worthless.